Test Report: KVM_Linux_crio 19690

f8db61c9b74e1fc8d4208c01add19855c5953b45:2024-09-23:36339

Test fail (12/274)

TestAddons/parallel/Registry (75.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.935768ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0923 12:38:39.518779  669447 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 12:38:39.518799  669447 kapi.go:107] duration metric: took 6.581489ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "registry-66c9cd494c-srklj" [ca56f86a-1049-47d9-b11b-9f492f1f0e5a] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.114935448s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xmmdr" [cf74bb33-75e5-4844-a3a8-fc698241ea5c] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003804662s
addons_test.go:338: (dbg) Run:  kubectl --context addons-052630 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-052630 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-052630 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.093377574s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-052630 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 ip
2024/09/23 12:39:51 [DEBUG] GET http://192.168.39.225:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 addons disable registry --alsologtostderr -v=1
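
For reference, the failure above is an in-cluster HTTP reachability probe against the registry Service timing out. A minimal way to re-run the same probe by hand (a sketch only; it assumes the addons-052630 profile is still running and reuses the context, image, and service names that appear in the log above):

	# check that the Service the test targets exists and has endpoints
	kubectl --context addons-052630 -n kube-system get svc registry
	kubectl --context addons-052630 -n kube-system get endpoints registry

	# repeat the exact probe the test runs (busybox wget --spider only checks reachability)
	kubectl --context addons-052630 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
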
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-052630 -n addons-052630
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-052630 logs -n 25: (1.59283426s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-832165 | jenkins | v1.34.0 | 23 Sep 24 12:27 UTC |                     |
	|         | -p download-only-832165              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| delete  | -p download-only-832165              | download-only-832165 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| start   | -o=json --download-only              | download-only-473947 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC |                     |
	|         | -p download-only-473947              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| delete  | -p download-only-473947              | download-only-473947 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| delete  | -p download-only-832165              | download-only-832165 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| delete  | -p download-only-473947              | download-only-473947 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| start   | --download-only -p                   | binary-mirror-529103 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC |                     |
	|         | binary-mirror-529103                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35373               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-529103              | binary-mirror-529103 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| addons  | disable dashboard -p                 | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC |                     |
	|         | addons-052630                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC |                     |
	|         | addons-052630                        |                      |         |         |                     |                     |
	| start   | -p addons-052630 --wait=true         | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:30 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:38 UTC | 23 Sep 24 12:38 UTC |
	|         | -p addons-052630                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-052630 addons disable         | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:38 UTC | 23 Sep 24 12:38 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | addons-052630                        |                      |         |         |                     |                     |
	| ssh     | addons-052630 ssh curl -s            | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| addons  | addons-052630 addons                 | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-052630 addons                 | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | -p addons-052630                     |                      |         |         |                     |                     |
	| addons  | addons-052630 addons disable         | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| ip      | addons-052630 ip                     | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	| addons  | addons-052630 addons disable         | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:28:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:28:24.813371  670144 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:28:24.813646  670144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:28:24.813655  670144 out.go:358] Setting ErrFile to fd 2...
	I0923 12:28:24.813660  670144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:28:24.813860  670144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 12:28:24.814564  670144 out.go:352] Setting JSON to false
	I0923 12:28:24.815641  670144 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7848,"bootTime":1727086657,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:28:24.815741  670144 start.go:139] virtualization: kvm guest
	I0923 12:28:24.818077  670144 out.go:177] * [addons-052630] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:28:24.819427  670144 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:28:24.819496  670144 notify.go:220] Checking for updates...
	I0923 12:28:24.821743  670144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:28:24.823109  670144 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:28:24.824398  670144 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:28:24.825560  670144 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:28:24.826608  670144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:28:24.827862  670144 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:28:24.861163  670144 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 12:28:24.862619  670144 start.go:297] selected driver: kvm2
	I0923 12:28:24.862645  670144 start.go:901] validating driver "kvm2" against <nil>
	I0923 12:28:24.862661  670144 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:28:24.863497  670144 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:28:24.863608  670144 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 12:28:24.879912  670144 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 12:28:24.879978  670144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:28:24.880260  670144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:28:24.880303  670144 cni.go:84] Creating CNI manager for ""
	I0923 12:28:24.880362  670144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 12:28:24.880373  670144 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 12:28:24.880464  670144 start.go:340] cluster config:
	{Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:28:24.880601  670144 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:28:24.882416  670144 out.go:177] * Starting "addons-052630" primary control-plane node in "addons-052630" cluster
	I0923 12:28:24.883605  670144 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:28:24.883654  670144 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 12:28:24.883668  670144 cache.go:56] Caching tarball of preloaded images
	I0923 12:28:24.883756  670144 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:28:24.883772  670144 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:28:24.884127  670144 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/config.json ...
	I0923 12:28:24.884158  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/config.json: {Name:mk8f8b007c3bc269ac83b2216416a2c7aa34749b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:24.884352  670144 start.go:360] acquireMachinesLock for addons-052630: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:28:24.884434  670144 start.go:364] duration metric: took 46.812µs to acquireMachinesLock for "addons-052630"
	I0923 12:28:24.884466  670144 start.go:93] Provisioning new machine with config: &{Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:28:24.884576  670144 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 12:28:24.886275  670144 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 12:28:24.886477  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:28:24.886532  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:28:24.901608  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I0923 12:28:24.902121  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:28:24.902783  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:28:24.902809  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:28:24.903341  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:28:24.903572  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:24.903730  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:24.903901  670144 start.go:159] libmachine.API.Create for "addons-052630" (driver="kvm2")
	I0923 12:28:24.903933  670144 client.go:168] LocalClient.Create starting
	I0923 12:28:24.903984  670144 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:28:24.971472  670144 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:28:25.199996  670144 main.go:141] libmachine: Running pre-create checks...
	I0923 12:28:25.200025  670144 main.go:141] libmachine: (addons-052630) Calling .PreCreateCheck
	I0923 12:28:25.200603  670144 main.go:141] libmachine: (addons-052630) Calling .GetConfigRaw
	I0923 12:28:25.201064  670144 main.go:141] libmachine: Creating machine...
	I0923 12:28:25.201081  670144 main.go:141] libmachine: (addons-052630) Calling .Create
	I0923 12:28:25.201318  670144 main.go:141] libmachine: (addons-052630) Creating KVM machine...
	I0923 12:28:25.202978  670144 main.go:141] libmachine: (addons-052630) DBG | found existing default KVM network
	I0923 12:28:25.203985  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.203807  670166 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I0923 12:28:25.204034  670144 main.go:141] libmachine: (addons-052630) DBG | created network xml: 
	I0923 12:28:25.204055  670144 main.go:141] libmachine: (addons-052630) DBG | <network>
	I0923 12:28:25.204063  670144 main.go:141] libmachine: (addons-052630) DBG |   <name>mk-addons-052630</name>
	I0923 12:28:25.204070  670144 main.go:141] libmachine: (addons-052630) DBG |   <dns enable='no'/>
	I0923 12:28:25.204076  670144 main.go:141] libmachine: (addons-052630) DBG |   
	I0923 12:28:25.204082  670144 main.go:141] libmachine: (addons-052630) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 12:28:25.204088  670144 main.go:141] libmachine: (addons-052630) DBG |     <dhcp>
	I0923 12:28:25.204093  670144 main.go:141] libmachine: (addons-052630) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 12:28:25.204101  670144 main.go:141] libmachine: (addons-052630) DBG |     </dhcp>
	I0923 12:28:25.204105  670144 main.go:141] libmachine: (addons-052630) DBG |   </ip>
	I0923 12:28:25.204112  670144 main.go:141] libmachine: (addons-052630) DBG |   
	I0923 12:28:25.204119  670144 main.go:141] libmachine: (addons-052630) DBG | </network>
	I0923 12:28:25.204129  670144 main.go:141] libmachine: (addons-052630) DBG | 
	I0923 12:28:25.209600  670144 main.go:141] libmachine: (addons-052630) DBG | trying to create private KVM network mk-addons-052630 192.168.39.0/24...
	I0923 12:28:25.278429  670144 main.go:141] libmachine: (addons-052630) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630 ...
	I0923 12:28:25.278462  670144 main.go:141] libmachine: (addons-052630) DBG | private KVM network mk-addons-052630 192.168.39.0/24 created
	I0923 12:28:25.278471  670144 main.go:141] libmachine: (addons-052630) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:28:25.278507  670144 main.go:141] libmachine: (addons-052630) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:28:25.278523  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.278366  670166 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:28:25.561478  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.561306  670166 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa...
	I0923 12:28:25.781646  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.781463  670166 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/addons-052630.rawdisk...
	I0923 12:28:25.781686  670144 main.go:141] libmachine: (addons-052630) DBG | Writing magic tar header
	I0923 12:28:25.781699  670144 main.go:141] libmachine: (addons-052630) DBG | Writing SSH key tar header
	I0923 12:28:25.781710  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.781618  670166 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630 ...
	I0923 12:28:25.781843  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630
	I0923 12:28:25.781876  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630 (perms=drwx------)
	I0923 12:28:25.781893  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:28:25.781906  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:28:25.781926  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:28:25.781942  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:28:25.781979  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:28:25.781995  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:28:25.782008  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:28:25.782019  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:28:25.782030  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:28:25.782042  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home
	I0923 12:28:25.782054  670144 main.go:141] libmachine: (addons-052630) DBG | Skipping /home - not owner
	I0923 12:28:25.782073  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:28:25.782083  670144 main.go:141] libmachine: (addons-052630) Creating domain...
	I0923 12:28:25.783344  670144 main.go:141] libmachine: (addons-052630) define libvirt domain using xml: 
	I0923 12:28:25.783364  670144 main.go:141] libmachine: (addons-052630) <domain type='kvm'>
	I0923 12:28:25.783372  670144 main.go:141] libmachine: (addons-052630)   <name>addons-052630</name>
	I0923 12:28:25.783376  670144 main.go:141] libmachine: (addons-052630)   <memory unit='MiB'>4000</memory>
	I0923 12:28:25.783381  670144 main.go:141] libmachine: (addons-052630)   <vcpu>2</vcpu>
	I0923 12:28:25.783385  670144 main.go:141] libmachine: (addons-052630)   <features>
	I0923 12:28:25.783390  670144 main.go:141] libmachine: (addons-052630)     <acpi/>
	I0923 12:28:25.783396  670144 main.go:141] libmachine: (addons-052630)     <apic/>
	I0923 12:28:25.783403  670144 main.go:141] libmachine: (addons-052630)     <pae/>
	I0923 12:28:25.783409  670144 main.go:141] libmachine: (addons-052630)     
	I0923 12:28:25.783417  670144 main.go:141] libmachine: (addons-052630)   </features>
	I0923 12:28:25.783427  670144 main.go:141] libmachine: (addons-052630)   <cpu mode='host-passthrough'>
	I0923 12:28:25.783435  670144 main.go:141] libmachine: (addons-052630)   
	I0923 12:28:25.783446  670144 main.go:141] libmachine: (addons-052630)   </cpu>
	I0923 12:28:25.783453  670144 main.go:141] libmachine: (addons-052630)   <os>
	I0923 12:28:25.783463  670144 main.go:141] libmachine: (addons-052630)     <type>hvm</type>
	I0923 12:28:25.783477  670144 main.go:141] libmachine: (addons-052630)     <boot dev='cdrom'/>
	I0923 12:28:25.783486  670144 main.go:141] libmachine: (addons-052630)     <boot dev='hd'/>
	I0923 12:28:25.783493  670144 main.go:141] libmachine: (addons-052630)     <bootmenu enable='no'/>
	I0923 12:28:25.783502  670144 main.go:141] libmachine: (addons-052630)   </os>
	I0923 12:28:25.783511  670144 main.go:141] libmachine: (addons-052630)   <devices>
	I0923 12:28:25.783529  670144 main.go:141] libmachine: (addons-052630)     <disk type='file' device='cdrom'>
	I0923 12:28:25.783552  670144 main.go:141] libmachine: (addons-052630)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/boot2docker.iso'/>
	I0923 12:28:25.783577  670144 main.go:141] libmachine: (addons-052630)       <target dev='hdc' bus='scsi'/>
	I0923 12:28:25.783588  670144 main.go:141] libmachine: (addons-052630)       <readonly/>
	I0923 12:28:25.783595  670144 main.go:141] libmachine: (addons-052630)     </disk>
	I0923 12:28:25.783607  670144 main.go:141] libmachine: (addons-052630)     <disk type='file' device='disk'>
	I0923 12:28:25.783618  670144 main.go:141] libmachine: (addons-052630)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:28:25.783633  670144 main.go:141] libmachine: (addons-052630)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/addons-052630.rawdisk'/>
	I0923 12:28:25.783643  670144 main.go:141] libmachine: (addons-052630)       <target dev='hda' bus='virtio'/>
	I0923 12:28:25.783719  670144 main.go:141] libmachine: (addons-052630)     </disk>
	I0923 12:28:25.783743  670144 main.go:141] libmachine: (addons-052630)     <interface type='network'>
	I0923 12:28:25.783752  670144 main.go:141] libmachine: (addons-052630)       <source network='mk-addons-052630'/>
	I0923 12:28:25.783766  670144 main.go:141] libmachine: (addons-052630)       <model type='virtio'/>
	I0923 12:28:25.783776  670144 main.go:141] libmachine: (addons-052630)     </interface>
	I0923 12:28:25.783789  670144 main.go:141] libmachine: (addons-052630)     <interface type='network'>
	I0923 12:28:25.783807  670144 main.go:141] libmachine: (addons-052630)       <source network='default'/>
	I0923 12:28:25.783821  670144 main.go:141] libmachine: (addons-052630)       <model type='virtio'/>
	I0923 12:28:25.783832  670144 main.go:141] libmachine: (addons-052630)     </interface>
	I0923 12:28:25.783845  670144 main.go:141] libmachine: (addons-052630)     <serial type='pty'>
	I0923 12:28:25.783856  670144 main.go:141] libmachine: (addons-052630)       <target port='0'/>
	I0923 12:28:25.783866  670144 main.go:141] libmachine: (addons-052630)     </serial>
	I0923 12:28:25.783878  670144 main.go:141] libmachine: (addons-052630)     <console type='pty'>
	I0923 12:28:25.783909  670144 main.go:141] libmachine: (addons-052630)       <target type='serial' port='0'/>
	I0923 12:28:25.783928  670144 main.go:141] libmachine: (addons-052630)     </console>
	I0923 12:28:25.783942  670144 main.go:141] libmachine: (addons-052630)     <rng model='virtio'>
	I0923 12:28:25.783955  670144 main.go:141] libmachine: (addons-052630)       <backend model='random'>/dev/random</backend>
	I0923 12:28:25.783971  670144 main.go:141] libmachine: (addons-052630)     </rng>
	I0923 12:28:25.783993  670144 main.go:141] libmachine: (addons-052630)     
	I0923 12:28:25.784002  670144 main.go:141] libmachine: (addons-052630)     
	I0923 12:28:25.784006  670144 main.go:141] libmachine: (addons-052630)   </devices>
	I0923 12:28:25.784016  670144 main.go:141] libmachine: (addons-052630) </domain>
	I0923 12:28:25.784025  670144 main.go:141] libmachine: (addons-052630) 
	I0923 12:28:25.788537  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:fa:ec:fb in network default
	I0923 12:28:25.789254  670144 main.go:141] libmachine: (addons-052630) Ensuring networks are active...
	I0923 12:28:25.789279  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:25.790127  670144 main.go:141] libmachine: (addons-052630) Ensuring network default is active
	I0923 12:28:25.790514  670144 main.go:141] libmachine: (addons-052630) Ensuring network mk-addons-052630 is active
	I0923 12:28:25.791168  670144 main.go:141] libmachine: (addons-052630) Getting domain xml...
	I0923 12:28:25.792095  670144 main.go:141] libmachine: (addons-052630) Creating domain...
	I0923 12:28:27.038227  670144 main.go:141] libmachine: (addons-052630) Waiting to get IP...
	I0923 12:28:27.038933  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:27.039372  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:27.039471  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:27.039378  670166 retry.go:31] will retry after 209.573222ms: waiting for machine to come up
	I0923 12:28:27.250785  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:27.251320  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:27.251357  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:27.251238  670166 retry.go:31] will retry after 325.370385ms: waiting for machine to come up
	I0923 12:28:27.577921  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:27.578545  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:27.578574  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:27.578492  670166 retry.go:31] will retry after 474.794229ms: waiting for machine to come up
	I0923 12:28:28.055184  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:28.055670  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:28.055696  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:28.055630  670166 retry.go:31] will retry after 474.62618ms: waiting for machine to come up
	I0923 12:28:28.532060  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:28.532544  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:28.532570  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:28.532497  670166 retry.go:31] will retry after 466.59648ms: waiting for machine to come up
	I0923 12:28:29.001527  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:29.002034  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:29.002061  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:29.001954  670166 retry.go:31] will retry after 665.819727ms: waiting for machine to come up
	I0923 12:28:29.670150  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:29.670557  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:29.670586  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:29.670496  670166 retry.go:31] will retry after 826.725256ms: waiting for machine to come up
	I0923 12:28:30.499346  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:30.499773  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:30.499804  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:30.499717  670166 retry.go:31] will retry after 1.111672977s: waiting for machine to come up
	I0923 12:28:31.612864  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:31.613371  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:31.613397  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:31.613333  670166 retry.go:31] will retry after 1.267221609s: waiting for machine to come up
	I0923 12:28:32.882782  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:32.883202  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:32.883225  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:32.883150  670166 retry.go:31] will retry after 2.15228845s: waiting for machine to come up
	I0923 12:28:35.036699  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:35.037202  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:35.037238  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:35.037140  670166 retry.go:31] will retry after 2.618330832s: waiting for machine to come up
	I0923 12:28:37.659044  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:37.659708  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:37.659740  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:37.659658  670166 retry.go:31] will retry after 3.182891363s: waiting for machine to come up
	I0923 12:28:40.843714  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:40.844042  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:40.844066  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:40.843990  670166 retry.go:31] will retry after 4.470723393s: waiting for machine to come up
	I0923 12:28:45.316645  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.317132  670144 main.go:141] libmachine: (addons-052630) Found IP for machine: 192.168.39.225
	I0923 12:28:45.317158  670144 main.go:141] libmachine: (addons-052630) Reserving static IP address...
	I0923 12:28:45.317201  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has current primary IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.317585  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find host DHCP lease matching {name: "addons-052630", mac: "52:54:00:6d:fc:98", ip: "192.168.39.225"} in network mk-addons-052630
	I0923 12:28:45.396974  670144 main.go:141] libmachine: (addons-052630) Reserved static IP address: 192.168.39.225
	I0923 12:28:45.397017  670144 main.go:141] libmachine: (addons-052630) Waiting for SSH to be available...
	I0923 12:28:45.397030  670144 main.go:141] libmachine: (addons-052630) DBG | Getting to WaitForSSH function...
	I0923 12:28:45.399773  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.400242  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.400280  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.400442  670144 main.go:141] libmachine: (addons-052630) DBG | Using SSH client type: external
	I0923 12:28:45.400468  670144 main.go:141] libmachine: (addons-052630) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa (-rw-------)
	I0923 12:28:45.400508  670144 main.go:141] libmachine: (addons-052630) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:28:45.400526  670144 main.go:141] libmachine: (addons-052630) DBG | About to run SSH command:
	I0923 12:28:45.400541  670144 main.go:141] libmachine: (addons-052630) DBG | exit 0
	I0923 12:28:45.526239  670144 main.go:141] libmachine: (addons-052630) DBG | SSH cmd err, output: <nil>: 
	I0923 12:28:45.526548  670144 main.go:141] libmachine: (addons-052630) KVM machine creation complete!
	I0923 12:28:45.526929  670144 main.go:141] libmachine: (addons-052630) Calling .GetConfigRaw
	I0923 12:28:45.527556  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:45.527717  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:45.527840  670144 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:28:45.527856  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:28:45.529429  670144 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:28:45.529452  670144 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:28:45.529459  670144 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:28:45.529467  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.531511  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.531931  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.531976  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.532096  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.532276  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.532439  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.532595  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.532719  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.532912  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.532928  670144 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:28:45.641401  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:28:45.641429  670144 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:28:45.641436  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.644203  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.644585  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.644605  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.644794  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.645002  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.645132  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.645234  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.645389  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.645579  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.645589  670144 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:28:45.754409  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:28:45.754564  670144 main.go:141] libmachine: found compatible host: buildroot
	I0923 12:28:45.754586  670144 main.go:141] libmachine: Provisioning with buildroot...
	I0923 12:28:45.754597  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:45.754895  670144 buildroot.go:166] provisioning hostname "addons-052630"
	I0923 12:28:45.754923  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:45.755128  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.758313  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.758762  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.758793  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.758946  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.759146  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.759329  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.759482  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.759643  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.759825  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.759836  670144 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-052630 && echo "addons-052630" | sudo tee /etc/hostname
	I0923 12:28:45.884101  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-052630
	
	I0923 12:28:45.884147  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.886809  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.887156  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.887190  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.887396  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.887621  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.887844  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.887995  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.888203  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.888386  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.888401  670144 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-052630' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-052630/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-052630' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:28:46.010925  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
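The two SSH commands above set the transient hostname and then patch /etc/hosts only when no entry for the node name is present, rewriting the 127.0.1.1 line if one exists and appending it otherwise. A minimal Go sketch of assembling that guarded script (the helper name is hypothetical; the script text mirrors the log):

package main

import "fmt"

// hostsPatchScript returns a shell script that rewrites the 127.0.1.1 entry
// for the given hostname, or appends one, but only when no entry exists yet.
func hostsPatchScript(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsPatchScript("addons-052630"))
}
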
	I0923 12:28:46.010962  670144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:28:46.011014  670144 buildroot.go:174] setting up certificates
	I0923 12:28:46.011029  670144 provision.go:84] configureAuth start
	I0923 12:28:46.011047  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:46.011410  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:46.014459  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.014799  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.014825  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.014976  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.017411  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.017737  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.017810  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.017885  670144 provision.go:143] copyHostCerts
	I0923 12:28:46.017961  670144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 12:28:46.018127  670144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:28:46.018208  670144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:28:46.018272  670144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.addons-052630 san=[127.0.0.1 192.168.39.225 addons-052630 localhost minikube]
	I0923 12:28:46.112323  670144 provision.go:177] copyRemoteCerts
	I0923 12:28:46.112412  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:28:46.112450  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.115251  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.115655  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.115682  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.115895  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.116119  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.116317  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.116487  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.199745  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:28:46.222501  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:28:46.245931  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:28:46.268307  670144 provision.go:87] duration metric: took 257.259613ms to configureAuth
	I0923 12:28:46.268338  670144 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:28:46.268561  670144 config.go:182] Loaded profile config "addons-052630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:28:46.268643  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.271831  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.272263  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.272294  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.272469  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.272699  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.272868  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.273026  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.273169  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:46.273365  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:46.273385  670144 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:28:46.493088  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
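The command above writes a one-line /etc/sysconfig/crio.minikube drop-in carrying the extra --insecure-registry flag for the service CIDR, then restarts CRI-O so it takes effect; the echoed output confirms the file contents. A small sketch of rendering that drop-in (helper name assumed, content as shown in the log):

package main

import (
	"fmt"
	"strings"
)

// crioSysconfig renders the contents of /etc/sysconfig/crio.minikube,
// a drop-in that the CRI-O unit sources for extra runtime flags.
func crioSysconfig(opts []string) string {
	return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='%s '\n", strings.Join(opts, " "))
}

func main() {
	// Matches the file written over SSH in the log above.
	fmt.Print(crioSysconfig([]string{"--insecure-registry 10.96.0.0/12"}))
}
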
	I0923 12:28:46.493128  670144 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:28:46.493136  670144 main.go:141] libmachine: (addons-052630) Calling .GetURL
	I0923 12:28:46.494629  670144 main.go:141] libmachine: (addons-052630) DBG | Using libvirt version 6000000
	I0923 12:28:46.496809  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.497168  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.497204  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.497405  670144 main.go:141] libmachine: Docker is up and running!
	I0923 12:28:46.497422  670144 main.go:141] libmachine: Reticulating splines...
	I0923 12:28:46.497430  670144 client.go:171] duration metric: took 21.593485371s to LocalClient.Create
	I0923 12:28:46.497459  670144 start.go:167] duration metric: took 21.593561276s to libmachine.API.Create "addons-052630"
	I0923 12:28:46.497469  670144 start.go:293] postStartSetup for "addons-052630" (driver="kvm2")
	I0923 12:28:46.497479  670144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:28:46.497499  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.497777  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:28:46.497812  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.501032  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.501490  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.501519  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.501865  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.502081  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.502366  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.502522  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.587938  670144 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:28:46.592031  670144 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:28:46.592074  670144 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:28:46.592166  670144 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:28:46.592204  670144 start.go:296] duration metric: took 94.729785ms for postStartSetup
	I0923 12:28:46.592263  670144 main.go:141] libmachine: (addons-052630) Calling .GetConfigRaw
	I0923 12:28:46.592996  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:46.595992  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.596372  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.596398  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.596737  670144 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/config.json ...
	I0923 12:28:46.596934  670144 start.go:128] duration metric: took 21.712346872s to createHost
	I0923 12:28:46.596958  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.599418  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.599733  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.599767  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.599907  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.600079  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.600203  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.600310  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.600443  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:46.600620  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:46.600630  670144 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:28:46.710677  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727094526.683192770
	
	I0923 12:28:46.710703  670144 fix.go:216] guest clock: 1727094526.683192770
	I0923 12:28:46.710711  670144 fix.go:229] Guest: 2024-09-23 12:28:46.68319277 +0000 UTC Remote: 2024-09-23 12:28:46.596946256 +0000 UTC m=+21.821646719 (delta=86.246514ms)
	I0923 12:28:46.710733  670144 fix.go:200] guest clock delta is within tolerance: 86.246514ms
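fix.go parses the guest's `date +%s.%N` output and compares it with the host-side timestamp, resyncing the clock only if the delta exceeds a tolerance; here the 86ms delta is accepted. A compact sketch of that comparison (the one-second tolerance below is an assumption, not minikube's actual threshold):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance reports whether the guest clock (seconds.nanoseconds from
// `date +%s.%N`) is close enough to the host clock that no resync is needed.
func withinTolerance(guest string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, false, err
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Duration(math.Abs(float64(host.Sub(guestTime))))
	return delta, delta <= tol, nil
}

func main() {
	host := time.Date(2024, 9, 23, 12, 28, 46, 596946256, time.UTC)
	delta, ok, _ := withinTolerance("1727094526.683192770", host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
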
	I0923 12:28:46.710738  670144 start.go:83] releasing machines lock for "addons-052630", held for 21.826289183s
	I0923 12:28:46.710760  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.711055  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:46.713772  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.714188  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.714222  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.714387  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.714956  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.715183  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.715309  670144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:28:46.715383  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.715446  670144 ssh_runner.go:195] Run: cat /version.json
	I0923 12:28:46.715472  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.718318  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.718628  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.718658  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.718683  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.718845  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.719062  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.719075  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.719096  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.719238  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.719257  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.719450  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.719450  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.719543  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.719701  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.832898  670144 ssh_runner.go:195] Run: systemctl --version
	I0923 12:28:46.838565  670144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:28:46.993556  670144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:28:46.999180  670144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:28:46.999247  670144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:28:47.014650  670144 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:28:47.014678  670144 start.go:495] detecting cgroup driver to use...
	I0923 12:28:47.014749  670144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:28:47.031900  670144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:28:47.045836  670144 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:28:47.045894  670144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:28:47.059242  670144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:28:47.072860  670144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:28:47.194879  670144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:28:47.358066  670144 docker.go:233] disabling docker service ...
	I0923 12:28:47.358133  670144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:28:47.371586  670144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:28:47.384467  670144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:28:47.500779  670144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:28:47.617653  670144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:28:47.631869  670144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:28:47.649294  670144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:28:47.649381  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.659959  670144 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:28:47.660033  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.670550  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.680493  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.691259  670144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:28:47.702167  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.712481  670144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.729016  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
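The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, add conmon_cgroup = "pod", and make sure default_sysctls opens unprivileged low ports. A pure-Go sketch of the same line-level rewrite on an in-memory copy of the file (the helper and the sample input are hypothetical; the field names follow the log):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rewriteCrioConf applies edits equivalent to the sed commands in the log,
// but on an in-memory copy of 02-crio.conf.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	if !strings.Contains(conf, "conmon_cgroup") {
		conf = strings.Replace(conf,
			fmt.Sprintf("cgroup_manager = %q", cgroupManager),
			fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager), 1)
	}
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}
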
	I0923 12:28:47.738741  670144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:28:47.747902  670144 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:28:47.747976  670144 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:28:47.759825  670144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
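The netfilter probe above is expected to fail on a fresh guest: the bridge sysctl only appears once br_netfilter is loaded, so the follow-up modprobe and the ip_forward write do the real work. A minimal sketch of the same check-then-load sequence run directly on the guest (requires root; paths taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// The sysctl path is absent until the br_netfilter module is loaded,
	// which is exactly the status-255 failure seen in the log.
	if _, err := os.Stat(bridgeSysctl); err != nil {
		fmt.Println("bridge sysctl missing, loading br_netfilter:", err)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Println("modprobe failed:", err, string(out))
		}
	}

	// Enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("could not enable ip_forward (need root?):", err)
	}
}
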
	I0923 12:28:47.770483  670144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:28:47.890638  670144 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 12:28:47.979539  670144 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:28:47.979633  670144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 12:28:47.984471  670144 start.go:563] Will wait 60s for crictl version
	I0923 12:28:47.984558  670144 ssh_runner.go:195] Run: which crictl
	I0923 12:28:47.988396  670144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:28:48.030420  670144 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 12:28:48.030521  670144 ssh_runner.go:195] Run: crio --version
	I0923 12:28:48.056969  670144 ssh_runner.go:195] Run: crio --version
	I0923 12:28:48.087115  670144 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:28:48.088250  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:48.091126  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:48.091525  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:48.091557  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:48.091833  670144 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:28:48.095821  670144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:28:48.107261  670144 kubeadm.go:883] updating cluster {Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:28:48.107375  670144 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:28:48.107425  670144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:28:48.137489  670144 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 12:28:48.137564  670144 ssh_runner.go:195] Run: which lz4
	I0923 12:28:48.141366  670144 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 12:28:48.145228  670144 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 12:28:48.145266  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 12:28:49.300797  670144 crio.go:462] duration metric: took 1.159457126s to copy over tarball
	I0923 12:28:49.300880  670144 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 12:28:51.403387  670144 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10247438s)
	I0923 12:28:51.403418  670144 crio.go:469] duration metric: took 2.102584932s to extract the tarball
	I0923 12:28:51.403426  670144 ssh_runner.go:146] rm: /preloaded.tar.lz4
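The preload flow above is: stat /preloaded.tar.lz4 on the guest, copy the cached image tarball over when it is missing, unpack it into /var with lz4, then delete the tarball. A condensed sketch of the extract-and-clean step as a local command (tar flags copied from the log; error handling trimmed):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var and removes it,
// matching the tar invocation in the log.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
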
	I0923 12:28:51.439644  670144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:28:51.487343  670144 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 12:28:51.487372  670144 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:28:51.487380  670144 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.31.1 crio true true} ...
	I0923 12:28:51.487484  670144 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-052630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
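The kubelet drop-in above is rendered from the node name, node IP and Kubernetes version before being copied to /etc/systemd/system/kubelet.service.d. A small text/template sketch that produces an equivalent [Service] section (the template text approximates the unit shown in the log; it is not minikube's embedded template):

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "addons-052630",
		"NodeIP":            "192.168.39.225",
	})
}
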
	I0923 12:28:51.487549  670144 ssh_runner.go:195] Run: crio config
	I0923 12:28:51.529159  670144 cni.go:84] Creating CNI manager for ""
	I0923 12:28:51.529194  670144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 12:28:51.529211  670144 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:28:51.529243  670144 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-052630 NodeName:addons-052630 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:28:51.529421  670144 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-052630"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 12:28:51.529489  670144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:28:51.538786  670144 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:28:51.538860  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 12:28:51.547357  670144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 12:28:51.563034  670144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:28:51.579309  670144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0923 12:28:51.595202  670144 ssh_runner.go:195] Run: grep 192.168.39.225	control-plane.minikube.internal$ /etc/hosts
	I0923 12:28:51.598885  670144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:28:51.610214  670144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:28:51.733757  670144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:28:51.750735  670144 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630 for IP: 192.168.39.225
	I0923 12:28:51.750770  670144 certs.go:194] generating shared ca certs ...
	I0923 12:28:51.750794  670144 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:51.751013  670144 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:28:51.991610  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt ...
	I0923 12:28:51.991645  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt: {Name:mk278617102c801f9caeeac933d8c272fa433146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:51.991889  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key ...
	I0923 12:28:51.991905  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key: {Name:mk95fd2f326ff7501892adf485a2ad45653eea64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:51.992016  670144 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:28:52.107448  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt ...
	I0923 12:28:52.107483  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt: {Name:mkab8a60190e4e6c41e7af4f15f6ef17b87ed124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.107687  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key ...
	I0923 12:28:52.107702  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key: {Name:mk02e351bcbba1d3a2fba48c9faa8507f1dc7f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
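certs.go generates two self-signed CAs here, minikubeCA and proxyClientCA, writing each .crt/.key pair under the profile's .minikube directory before deriving the per-profile certificates from them. A minimal crypto/x509 sketch of creating one such CA (key size, validity and output names are assumptions, not minikube's exact parameters):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// writeSelfSignedCA creates an RSA key pair and a self-signed CA certificate,
// writing PEM files comparable to .minikube/ca.{crt,key}.
func writeSelfSignedCA(commonName, certPath, keyPath string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: commonName},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile(certPath, certPEM, 0o644); err != nil {
		return err
	}
	return os.WriteFile(keyPath, keyPEM, 0o600)
}

func main() {
	_ = writeSelfSignedCA("minikubeCA", "ca.crt", "ca.key")
}
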
	I0923 12:28:52.107800  670144 certs.go:256] generating profile certs ...
	I0923 12:28:52.107883  670144 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.key
	I0923 12:28:52.107915  670144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt with IP's: []
	I0923 12:28:52.582241  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt ...
	I0923 12:28:52.582281  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: {Name:mkaf7ea4dbed68876d268afef229ce386755abe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.582498  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.key ...
	I0923 12:28:52.582514  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.key: {Name:mkdce34cb498d97b74470517b32fdf3aa826f879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.582615  670144 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca
	I0923 12:28:52.582638  670144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225]
	I0923 12:28:52.768950  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca ...
	I0923 12:28:52.768994  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca: {Name:mkbaa634fbd0b311944b39e34f00f96971e7ce59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.769251  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca ...
	I0923 12:28:52.769274  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca: {Name:mkf94e3b64c79f3950341d5ac1c59fe9bdbc9286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.769399  670144 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt
	I0923 12:28:52.769586  670144 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key
	I0923 12:28:52.769706  670144 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key
	I0923 12:28:52.769730  670144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt with IP's: []
	I0923 12:28:52.993061  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt ...
	I0923 12:28:52.993100  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt: {Name:mkc6749530eb8ff541e082b9ac5787b31147fda9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.993317  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key ...
	I0923 12:28:52.993335  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key: {Name:mk1f12283a82c9b262b0a92c2d76e010fb6f0100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.993550  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:28:52.993587  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:28:52.993614  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:28:52.993635  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:28:52.994363  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:28:53.025659  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:28:53.052117  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:28:53.077309  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:28:53.103143  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 12:28:53.126620  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 12:28:53.149963  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:28:53.173855  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:28:53.197238  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:28:53.220421  670144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:28:53.236569  670144 ssh_runner.go:195] Run: openssl version
	I0923 12:28:53.242319  670144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:28:53.253251  670144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:28:53.257949  670144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:28:53.258030  670144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:28:53.264286  670144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
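The two steps above install minikubeCA.pem into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 here) so TLS clients on the guest trust it. A sketch of the same hash-then-symlink sequence, shelling out to openssl as the log does (helper name assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash symlinks certPath into /etc/ssl/certs as <subject-hash>.0,
// mirroring the openssl + ln -fs commands in the log.
func linkCAByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
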
	I0923 12:28:53.275223  670144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:28:53.279442  670144 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:28:53.279513  670144 kubeadm.go:392] StartCluster: {Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:28:53.279600  670144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 12:28:53.279685  670144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 12:28:53.314839  670144 cri.go:89] found id: ""
	I0923 12:28:53.314909  670144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:28:53.327186  670144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 12:28:53.336989  670144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 12:28:53.361585  670144 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:28:53.361612  670144 kubeadm.go:157] found existing configuration files:
	
	I0923 12:28:53.361662  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 12:28:53.381977  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:28:53.382054  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 12:28:53.392118  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 12:28:53.401098  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:28:53.401165  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 12:28:53.410993  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 12:28:53.420212  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:28:53.420273  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 12:28:53.429796  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 12:28:53.439423  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:28:53.439499  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
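The block above is the stale-config sweep: each existing kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not contain it; on a first start every grep simply fails because the files are absent. A compact sketch of that loop (file list and endpoint taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any existing kubeconfig under /etc/kubernetes
// that does not point at the expected control-plane endpoint, matching the
// grep-then-rm sequence in the log. Missing files are simply skipped.
func removeStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // not there yet; nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale", f)
			_ = os.Remove(f)
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
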
	I0923 12:28:53.449163  670144 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 12:28:53.502584  670144 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 12:28:53.502741  670144 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 12:28:53.605559  670144 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 12:28:53.605689  670144 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 12:28:53.605816  670144 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 12:28:53.618515  670144 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 12:28:53.836787  670144 out.go:235]   - Generating certificates and keys ...
	I0923 12:28:53.836912  670144 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 12:28:53.836995  670144 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 12:28:53.873040  670144 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 12:28:54.032114  670144 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 12:28:54.141767  670144 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 12:28:54.255622  670144 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 12:28:54.855891  670144 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 12:28:54.856105  670144 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-052630 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0923 12:28:55.008507  670144 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 12:28:55.008690  670144 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-052630 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0923 12:28:55.205727  670144 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 12:28:55.375985  670144 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 12:28:55.604036  670144 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 12:28:55.604271  670144 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 12:28:55.664982  670144 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 12:28:55.716232  670144 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 12:28:55.974342  670144 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 12:28:56.056044  670144 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 12:28:56.242837  670144 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 12:28:56.243301  670144 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 12:28:56.245752  670144 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 12:28:56.248113  670144 out.go:235]   - Booting up control plane ...
	I0923 12:28:56.248255  670144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 12:28:56.248368  670144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 12:28:56.248457  670144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 12:28:56.267013  670144 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 12:28:56.273131  670144 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 12:28:56.273201  670144 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 12:28:56.405616  670144 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 12:28:56.405814  670144 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 12:28:57.405800  670144 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001202262s
	I0923 12:28:57.405948  670144 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 12:29:02.406200  670144 kubeadm.go:310] [api-check] The API server is healthy after 5.001766702s
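For reference, the two health endpoints polled above can be checked by hand on the control-plane host; a minimal sketch, assuming shell access to the VM and the admin kubeconfig written by kubeadm (everything else is taken from the URLs in the log):
	# kubelet health -- the same endpoint the kubelet-check above polls
	curl -sf http://127.0.0.1:10248/healthz && echo "kubelet ok"
	# API server health via the aggregated readiness endpoint
	kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw='/readyz?verbose'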
	I0923 12:29:02.416901  670144 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 12:29:02.435826  670144 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 12:29:02.465176  670144 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 12:29:02.465450  670144 kubeadm.go:310] [mark-control-plane] Marking the node addons-052630 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 12:29:02.478428  670144 kubeadm.go:310] [bootstrap-token] Using token: 6nlf9d.x8d4dbn01qyxu2me
	I0923 12:29:02.480122  670144 out.go:235]   - Configuring RBAC rules ...
	I0923 12:29:02.480273  670144 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 12:29:02.484831  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 12:29:02.498051  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 12:29:02.506535  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 12:29:02.510753  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 12:29:02.514110  670144 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 12:29:02.816841  670144 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 12:29:03.265469  670144 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 12:29:03.814814  670144 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 12:29:03.815665  670144 kubeadm.go:310] 
	I0923 12:29:03.815740  670144 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 12:29:03.815754  670144 kubeadm.go:310] 
	I0923 12:29:03.815856  670144 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 12:29:03.815884  670144 kubeadm.go:310] 
	I0923 12:29:03.815943  670144 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 12:29:03.816033  670144 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 12:29:03.816112  670144 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 12:29:03.816122  670144 kubeadm.go:310] 
	I0923 12:29:03.816205  670144 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 12:29:03.816220  670144 kubeadm.go:310] 
	I0923 12:29:03.816283  670144 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 12:29:03.816292  670144 kubeadm.go:310] 
	I0923 12:29:03.816361  670144 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 12:29:03.816459  670144 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 12:29:03.816557  670144 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 12:29:03.816565  670144 kubeadm.go:310] 
	I0923 12:29:03.816662  670144 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 12:29:03.816807  670144 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 12:29:03.816828  670144 kubeadm.go:310] 
	I0923 12:29:03.816928  670144 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6nlf9d.x8d4dbn01qyxu2me \
	I0923 12:29:03.817053  670144 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff \
	I0923 12:29:03.817087  670144 kubeadm.go:310] 	--control-plane 
	I0923 12:29:03.817098  670144 kubeadm.go:310] 
	I0923 12:29:03.817208  670144 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 12:29:03.817218  670144 kubeadm.go:310] 
	I0923 12:29:03.817336  670144 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6nlf9d.x8d4dbn01qyxu2me \
	I0923 12:29:03.817491  670144 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff 
	I0923 12:29:03.818641  670144 kubeadm.go:310] W0923 12:28:53.480461     822 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:29:03.818988  670144 kubeadm.go:310] W0923 12:28:53.482044     822 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:29:03.819085  670144 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 12:29:03.819100  670144 cni.go:84] Creating CNI manager for ""
	I0923 12:29:03.819107  670144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 12:29:03.821098  670144 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 12:29:03.822568  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 12:29:03.832801  670144 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
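The 496-byte conflist copied above is not captured in this log; purely as an illustration of what a bridge CNI conflist of that shape typically contains (the file path is real, the contents below are assumed, not minikube's actual payload):
	# illustrative only -- the real 1-k8s.conflist contents are not shown in this report
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF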
	I0923 12:29:03.849124  670144 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 12:29:03.849234  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:03.849289  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-052630 minikube.k8s.io/updated_at=2024_09_23T12_29_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-052630 minikube.k8s.io/primary=true
	I0923 12:29:03.869073  670144 ops.go:34] apiserver oom_adj: -16
	I0923 12:29:03.987718  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:04.487902  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:04.988414  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:05.488480  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:05.988814  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:06.488344  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:06.987998  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:07.487981  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:07.987977  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:08.098139  670144 kubeadm.go:1113] duration metric: took 4.248990269s to wait for elevateKubeSystemPrivileges
	I0923 12:29:08.098178  670144 kubeadm.go:394] duration metric: took 14.818670797s to StartCluster
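The repeated "kubectl get sa default" runs above are a readiness poll: minikube loops until the controller-manager has created the default service account before granting kube-system elevated RBAC (the 4.2s "elevateKubeSystemPrivileges" metric). A rough shell equivalent of that wait, using the same binary and kubeconfig paths seen in the log (a sketch, not minikube's actual implementation):
	# poll until the default service account exists, then proceed
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done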
	I0923 12:29:08.098199  670144 settings.go:142] acquiring lock: {Name:mk3da09e51125fc906a9e1276ab490fc7b26b03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:29:08.098319  670144 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:29:08.098684  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/kubeconfig: {Name:mk213d38080414fbe499f6509d2653fd99103348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:29:08.098883  670144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 12:29:08.098897  670144 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:29:08.098959  670144 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
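Outside the test harness, the node wait and per-addon toggling logged here can be approximated from the CLI; a sketch assuming the addons-052630 profile/context from this run (the registry addon is just an example pick from the toEnable map):
	# wait for the control-plane node to report Ready (the harness allows 6m0s)
	kubectl --context addons-052630 wait --for=condition=Ready node/addons-052630 --timeout=6m
	# enable a single addon on this profile
	out/minikube-linux-amd64 -p addons-052630 addons enable registry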
	I0923 12:29:08.099099  670144 addons.go:69] Setting yakd=true in profile "addons-052630"
	I0923 12:29:08.099104  670144 config.go:182] Loaded profile config "addons-052630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:29:08.099133  670144 addons.go:234] Setting addon yakd=true in "addons-052630"
	I0923 12:29:08.099140  670144 addons.go:69] Setting inspektor-gadget=true in profile "addons-052630"
	I0923 12:29:08.099148  670144 addons.go:69] Setting default-storageclass=true in profile "addons-052630"
	I0923 12:29:08.099155  670144 addons.go:69] Setting ingress=true in profile "addons-052630"
	I0923 12:29:08.099164  670144 addons.go:69] Setting metrics-server=true in profile "addons-052630"
	I0923 12:29:08.099174  670144 addons.go:69] Setting cloud-spanner=true in profile "addons-052630"
	I0923 12:29:08.099179  670144 addons.go:234] Setting addon ingress=true in "addons-052630"
	I0923 12:29:08.099186  670144 addons.go:234] Setting addon metrics-server=true in "addons-052630"
	I0923 12:29:08.099174  670144 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-052630"
	I0923 12:29:08.099213  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099168  670144 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-052630"
	I0923 12:29:08.099224  670144 addons.go:69] Setting storage-provisioner=true in profile "addons-052630"
	I0923 12:29:08.099247  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099248  670144 addons.go:234] Setting addon storage-provisioner=true in "addons-052630"
	I0923 12:29:08.099178  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099297  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099185  670144 addons.go:69] Setting volcano=true in profile "addons-052630"
	I0923 12:29:08.099407  670144 addons.go:234] Setting addon volcano=true in "addons-052630"
	I0923 12:29:08.099456  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099684  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099696  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099705  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099709  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099726  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099123  670144 addons.go:69] Setting ingress-dns=true in profile "addons-052630"
	I0923 12:29:08.099728  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099737  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099739  670144 addons.go:234] Setting addon ingress-dns=true in "addons-052630"
	I0923 12:29:08.099769  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099797  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099133  670144 addons.go:69] Setting registry=true in profile "addons-052630"
	I0923 12:29:08.099726  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099823  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099158  670144 addons.go:234] Setting addon inspektor-gadget=true in "addons-052630"
	I0923 12:29:08.099199  670144 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-052630"
	I0923 12:29:08.099850  670144 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-052630"
	I0923 12:29:08.099824  670144 addons.go:234] Setting addon registry=true in "addons-052630"
	I0923 12:29:08.099189  670144 addons.go:234] Setting addon cloud-spanner=true in "addons-052630"
	I0923 12:29:08.099150  670144 addons.go:69] Setting gcp-auth=true in profile "addons-052630"
	I0923 12:29:08.099904  670144 mustload.go:65] Loading cluster: addons-052630
	I0923 12:29:08.099944  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099191  670144 addons.go:69] Setting volumesnapshots=true in profile "addons-052630"
	I0923 12:29:08.099995  670144 addons.go:234] Setting addon volumesnapshots=true in "addons-052630"
	I0923 12:29:08.100023  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100047  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100072  670144 config.go:182] Loaded profile config "addons-052630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:29:08.100106  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100108  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100138  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100335  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100357  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100427  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100433  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100447  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100452  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100507  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100524  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100027  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100940  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100978  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099218  670144 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-052630"
	I0923 12:29:08.101095  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.101121  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099193  670144 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-052630"
	I0923 12:29:08.101287  670144 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-052630"
	I0923 12:29:08.101320  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.101767  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.101789  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099835  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.103920  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.110406  670144 out.go:177] * Verifying Kubernetes components...
	I0923 12:29:08.119535  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.119599  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.120427  670144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:29:08.121315  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0923 12:29:08.131609  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0923 12:29:08.131626  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0923 12:29:08.131667  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.131728  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I0923 12:29:08.131769  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34841
	I0923 12:29:08.132495  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.132503  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.132728  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40563
	I0923 12:29:08.132745  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.132750  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.132759  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133032  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.133052  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133306  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.133386  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.133413  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.133429  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133440  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.133482  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.133740  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.133761  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133851  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.134081  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.134103  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.134261  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.134297  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.134429  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.134444  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.134456  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.134491  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.134545  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.134840  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.135147  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.135183  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.135520  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.135605  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.136217  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.136235  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.136747  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.137331  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.137369  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.164109  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0923 12:29:08.164380  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0923 12:29:08.164631  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.164825  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.165148  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.165170  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.165570  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.165782  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.165803  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.165872  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.166203  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.166826  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.166869  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.167521  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46661
	I0923 12:29:08.169501  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0923 12:29:08.174598  670144 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-052630"
	I0923 12:29:08.178846  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.179076  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.178895  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0923 12:29:08.178930  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I0923 12:29:08.178972  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0923 12:29:08.178981  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46573
	I0923 12:29:08.178989  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I0923 12:29:08.179006  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I0923 12:29:08.179011  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37487
	I0923 12:29:08.180724  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.181079  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.181494  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.181522  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.181629  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.182366  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.182449  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.182465  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.182959  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.183025  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.183079  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.183168  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.183230  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184031  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184134  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.184154  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184166  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.184243  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184292  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.184307  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.184322  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184439  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.184449  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.184993  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.185059  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.185103  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.185104  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.185125  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.185195  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.185234  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.185246  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.185293  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.185354  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I0923 12:29:08.185636  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.185676  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.186611  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.186677  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.186857  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.187550  670144 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 12:29:08.187925  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.187956  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.188199  670144 addons.go:234] Setting addon default-storageclass=true in "addons-052630"
	I0923 12:29:08.188242  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.188598  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.188651  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.188880  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 12:29:08.188903  670144 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 12:29:08.188923  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.189126  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.189189  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.189258  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.189738  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.191347  670144 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:29:08.191425  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.193271  670144 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 12:29:08.193533  670144 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:29:08.193553  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:29:08.193574  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.193841  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.193953  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.194007  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.194283  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.194821  670144 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 12:29:08.194839  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 12:29:08.194858  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.195552  670144 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 12:29:08.195768  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.195845  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37407
	I0923 12:29:08.196376  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0923 12:29:08.196521  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.196672  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 12:29:08.196691  670144 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 12:29:08.196719  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.197056  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.197598  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.197684  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.197702  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.198047  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.198072  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.198113  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.198266  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.198283  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.198479  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.198489  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.198547  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.198664  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.198771  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.198953  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.198987  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.199210  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.199249  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.199775  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.199959  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.202164  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.202238  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.202474  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.202495  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.202578  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.202596  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.203141  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.203337  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.203517  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.203558  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.203645  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.203720  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.203863  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.203890  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.204069  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.204122  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.204301  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.204456  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.204512  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.204526  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.204686  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.204802  670144 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 12:29:08.204956  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.205170  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.205332  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.205461  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.206267  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.206285  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.206516  670144 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 12:29:08.206532  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 12:29:08.206551  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.206706  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.207377  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.207419  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.208406  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46335
	I0923 12:29:08.209619  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.210047  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.210073  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.210236  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.210426  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.210566  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.210684  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.219445  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I0923 12:29:08.219533  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I0923 12:29:08.219589  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0923 12:29:08.220785  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0923 12:29:08.222697  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45277
	I0923 12:29:08.225038  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39561
	I0923 12:29:08.230680  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.230751  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.231036  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231200  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231237  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231376  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231767  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231972  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.231987  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233085  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.233089  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.233147  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.233211  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233227  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233345  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233361  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233363  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233373  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233375  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233386  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233880  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.233899  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233917  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233942  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.233992  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.234058  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.234091  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.234676  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.234695  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.234731  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.234771  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.234892  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.235047  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.235091  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.235382  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.235459  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.236193  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.236849  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.236900  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.238129  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.238450  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.238525  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.238905  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:08.238923  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:08.239076  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:08.239089  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:08.239099  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:08.239108  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:08.239201  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.240929  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 12:29:08.240995  670144 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 12:29:08.241278  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:08.242787  670144 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0923 12:29:08.242806  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 12:29:08.242897  670144 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
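The warning above is expected for this job: the volcano addon is skipped because it does not support the crio runtime, and the remaining addons continue to install. If the warning is unwanted on a crio profile, the addon can simply be left disabled; a sketch using the same CLI form that appears elsewhere in this report:
	out/minikube-linux-amd64 -p addons-052630 addons disable volcano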
	I0923 12:29:08.242950  670144 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 12:29:08.243197  670144 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 12:29:08.243226  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 12:29:08.243249  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.244528  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 12:29:08.246261  670144 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 12:29:08.246338  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 12:29:08.248195  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.248288  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.248307  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.248324  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.248538  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.248670  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.248779  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.250051  670144 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 12:29:08.250094  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 12:29:08.250119  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.251740  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 12:29:08.253185  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 12:29:08.253489  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.254182  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.254209  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.254598  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.254820  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.255024  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.255199  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.255972  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 12:29:08.256311  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0923 12:29:08.256884  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.256951  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0923 12:29:08.257532  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.257556  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.257657  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.258214  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.258239  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.258317  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.258515  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.258635  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 12:29:08.259348  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.259794  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.260013  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44129
	I0923 12:29:08.260784  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.260900  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 12:29:08.261518  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.262280  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 12:29:08.262305  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 12:29:08.262329  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.263111  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0923 12:29:08.263125  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.263182  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.263211  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.263259  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:29:08.263553  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.263921  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.264090  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.264286  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.264224  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.264779  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.264968  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.266052  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 12:29:08.266086  670144 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 12:29:08.266718  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.266760  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.267350  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.267376  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.267443  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.267645  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.267821  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.268028  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.268401  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.268717  670144 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:29:08.268738  670144 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:29:08.268757  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.269685  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 12:29:08.269698  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:29:08.270652  670144 out.go:177]   - Using image docker.io/busybox:stable
	I0923 12:29:08.271437  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 12:29:08.271460  670144 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 12:29:08.271489  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.271705  670144 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 12:29:08.271764  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 12:29:08.271806  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.271995  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.272341  670144 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 12:29:08.272361  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 12:29:08.272378  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.274161  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.274186  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.274494  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.274772  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.274952  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.275114  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.275804  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.275823  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.276398  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.276424  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.276437  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.276506  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.276618  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.276764  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.276814  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.276970  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.276988  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.277148  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.277311  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.277371  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37819
	I0923 12:29:08.277484  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.277856  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.277961  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.278476  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.278486  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.278532  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.278534  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.278618  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.278754  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.278860  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.278893  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.278987  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.279199  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.280614  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	W0923 12:29:08.281601  670144 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40984->192.168.39.225:22: read: connection reset by peer
	I0923 12:29:08.281629  670144 retry.go:31] will retry after 168.892195ms: ssh: handshake failed: read tcp 192.168.39.1:40984->192.168.39.225:22: read: connection reset by peer
	I0923 12:29:08.282699  670144 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 12:29:08.283895  670144 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 12:29:08.283910  670144 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 12:29:08.283931  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.286545  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.286945  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.286960  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.287159  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.287298  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.287395  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.287501  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	W0923 12:29:08.451555  670144 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41002->192.168.39.225:22: read: connection reset by peer
	I0923 12:29:08.451611  670144 retry.go:31] will retry after 370.404405ms: ssh: handshake failed: read tcp 192.168.39.1:41002->192.168.39.225:22: read: connection reset by peer
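
The two handshake resets above are transient (sshd on the fresh VM is still coming up) and the harness's retry logic backs off and reconnects. For reference, a manual equivalent of the SSH client the sshutil lines construct, using the IP, port, key path and username shown in the log (the StrictHostKeyChecking/UserKnownHostsFile options are assumptions, not part of the test output), would be:

  # one-off connectivity check against the addons-052630 VM; exits 0 once sshd accepts the key
  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa \
      -p 22 docker@192.168.39.225 true
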
	I0923 12:29:08.501288  670144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:29:08.501333  670144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 12:29:08.574946  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:29:08.650848  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 12:29:08.710883  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 12:29:08.718226  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 12:29:08.718254  670144 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 12:29:08.724979  670144 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 12:29:08.725012  670144 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 12:29:08.729985  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 12:29:08.730007  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 12:29:08.749343  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:29:08.759919  670144 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 12:29:08.759951  670144 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 12:29:08.762704  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 12:29:08.762725  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 12:29:08.780285  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 12:29:08.797085  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 12:29:08.819576  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 12:29:08.871295  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 12:29:08.871331  670144 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 12:29:08.873395  670144 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 12:29:08.873415  670144 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 12:29:08.913764  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 12:29:08.913797  670144 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 12:29:08.953695  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 12:29:08.953730  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 12:29:08.989719  670144 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 12:29:08.989745  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 12:29:09.174275  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 12:29:09.174311  670144 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 12:29:09.209701  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 12:29:09.213032  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:29:09.213062  670144 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 12:29:09.235662  670144 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 12:29:09.235711  670144 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 12:29:09.249524  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 12:29:09.249560  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 12:29:09.318365  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:29:09.380514  670144 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 12:29:09.380546  670144 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 12:29:09.396450  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 12:29:09.396479  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 12:29:09.491655  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 12:29:09.491699  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 12:29:09.507296  670144 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 12:29:09.507325  670144 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 12:29:09.619384  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 12:29:09.674496  670144 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 12:29:09.674532  670144 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 12:29:09.791378  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 12:29:09.791409  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 12:29:09.916463  670144 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 12:29:09.916518  670144 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 12:29:10.095369  670144 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 12:29:10.095403  670144 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 12:29:10.151495  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 12:29:10.151529  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 12:29:10.341472  670144 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 12:29:10.341505  670144 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 12:29:10.355580  670144 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 12:29:10.355613  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 12:29:10.419301  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 12:29:10.419334  670144 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 12:29:10.525480  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 12:29:10.525516  670144 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 12:29:10.591491  670144 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 12:29:10.591518  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 12:29:10.598636  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 12:29:10.676043  670144 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.174707084s)
	I0923 12:29:10.676099  670144 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.174727254s)
	I0923 12:29:10.676164  670144 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
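
The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway 192.168.39.1. A quick way to confirm the injected stanza from outside the harness (a sketch, assuming the local kubeconfig carries the addons-052630 context) is:

  # print the live Corefile; it should contain the injected hosts block
  kubectl --context addons-052630 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
  # expected fragment:
  #   hosts {
  #      192.168.39.1 host.minikube.internal
  #      fallthrough
  #   }
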
	I0923 12:29:10.677107  670144 node_ready.go:35] waiting up to 6m0s for node "addons-052630" to be "Ready" ...
	I0923 12:29:10.681243  670144 node_ready.go:49] node "addons-052630" has status "Ready":"True"
	I0923 12:29:10.681278  670144 node_ready.go:38] duration metric: took 4.144676ms for node "addons-052630" to be "Ready" ...
	I0923 12:29:10.681290  670144 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:29:10.697913  670144 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace to be "Ready" ...
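
The pod_ready helper polls the pod's Ready condition with a 6m budget. Roughly the same check can be expressed with kubectl wait (a sketch, not the harness's code path, assuming the addons-052630 context is available locally):

  # block until the CoreDNS pod reports Ready, or fail after 6 minutes
  kubectl --context addons-052630 -n kube-system wait --for=condition=Ready \
      pod/coredns-7c65d6cfc9-cvw7x --timeout=6m
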
	I0923 12:29:10.820653  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 12:29:10.825588  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 12:29:10.825612  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 12:29:11.166886  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 12:29:11.166909  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 12:29:11.180409  670144 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-052630" context rescaled to 1 replicas
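
The kapi.go rescale above trims the CoreDNS Deployment to a single replica for the single-node cluster; the standalone equivalent (a sketch, not what the harness executes) is:

  # keep one CoreDNS replica in the single-node addons-052630 cluster
  kubectl --context addons-052630 -n kube-system scale deployment coredns --replicas=1
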
	I0923 12:29:11.447351  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 12:29:11.447384  670144 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 12:29:11.721490  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 12:29:12.078341  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.427447212s)
	I0923 12:29:12.078414  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078429  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.078443  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.503450542s)
	I0923 12:29:12.078485  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078498  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.078823  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:12.078831  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.078854  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.078856  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.078863  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078868  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.078871  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.078878  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078891  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.079227  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:12.079263  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.079271  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.079315  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:12.079335  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.079341  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.803456  670144 pod_ready.go:103] pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:13.600807  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.889878058s)
	I0923 12:29:13.600875  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.600825  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.851443065s)
	I0923 12:29:13.600943  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.600962  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.600888  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.600895  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.820571857s)
	I0923 12:29:13.601061  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601070  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601238  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.601278  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.601285  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.601270  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.601304  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.601315  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601328  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601293  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601389  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601391  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.601429  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.601437  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.601449  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601455  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601954  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.602020  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.602042  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.602063  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.602072  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.602294  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.602306  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.603331  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.603349  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.801670  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.801695  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.802002  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.802041  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 12:29:13.802159  670144 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
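
The warning above is an optimistic-concurrency conflict: the local-path StorageClass changed between the addon callback's read and its update, so the API server rejects the stale write; re-applying against the latest version normally succeeds. Done by hand, marking local-path as the default class is a single patch (a sketch, assuming the addons-052630 context):

  # set the default-class annotation on the local-path StorageClass; safe to re-run after a conflict
  kubectl --context addons-052630 patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
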
	I0923 12:29:13.880403  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.880433  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.880754  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.880776  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.880836  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:14.235264  670144 pod_ready.go:93] pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:14.235297  670144 pod_ready.go:82] duration metric: took 3.537339059s for pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:14.235308  670144 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v7dmc" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:14.291401  670144 pod_ready.go:93] pod "coredns-7c65d6cfc9-v7dmc" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:14.291428  670144 pod_ready.go:82] duration metric: took 56.113983ms for pod "coredns-7c65d6cfc9-v7dmc" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:14.291438  670144 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.285912  670144 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 12:29:15.285962  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:15.289442  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.289901  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:15.289933  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.290206  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:15.290456  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:15.290643  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:15.290816  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:15.584286  670144 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 12:29:15.772056  670144 addons.go:234] Setting addon gcp-auth=true in "addons-052630"
	I0923 12:29:15.772177  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:15.772565  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:15.772604  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:15.789694  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0923 12:29:15.790390  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:15.790928  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:15.790953  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:15.791398  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:15.791922  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:15.791974  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:15.808522  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43819
	I0923 12:29:15.809129  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:15.809845  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:15.809875  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:15.810306  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:15.810586  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:15.812642  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:15.812962  670144 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 12:29:15.812999  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:15.816164  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.816654  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:15.816681  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.816904  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:15.817091  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:15.817236  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:15.817376  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:15.891555  670144 pod_ready.go:93] pod "etcd-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:15.891581  670144 pod_ready.go:82] duration metric: took 1.60013549s for pod "etcd-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.891591  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.987597  670144 pod_ready.go:93] pod "kube-apiserver-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:15.987625  670144 pod_ready.go:82] duration metric: took 96.027461ms for pod "kube-apiserver-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.987635  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.145156  670144 pod_ready.go:93] pod "kube-controller-manager-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:16.145181  670144 pod_ready.go:82] duration metric: took 157.538978ms for pod "kube-controller-manager-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.145191  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn9km" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.318509  670144 pod_ready.go:93] pod "kube-proxy-vn9km" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:16.318542  670144 pod_ready.go:82] duration metric: took 173.342123ms for pod "kube-proxy-vn9km" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.318556  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.367647  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.570518238s)
	I0923 12:29:16.367707  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.548102227s)
	I0923 12:29:16.367717  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.367731  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.367736  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.367751  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.367955  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.158101812s)
	I0923 12:29:16.368015  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368031  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368190  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368220  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368221  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368223  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368320  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368344  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368231  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368372  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368380  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368401  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.74898188s)
	I0923 12:29:16.368253  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368427  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368432  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368436  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368440  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368446  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368565  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.769896333s)
	I0923 12:29:16.368589  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368597  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368664  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368679  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368279  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368699  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368353  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.369082  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369131  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369155  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.369160  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.369167  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.369173  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.369248  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369265  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369295  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.369301  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.369309  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.369315  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.370458  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.370480  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.370493  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.370494  670144 addons.go:475] Verifying addon registry=true in "addons-052630"
	I0923 12:29:16.370783  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.370808  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.370815  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.371296  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.371308  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.371446  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.371466  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.371473  670144 addons.go:475] Verifying addon ingress=true in "addons-052630"
	I0923 12:29:16.372129  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.053719131s)
	I0923 12:29:16.372181  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.372203  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.372468  670144 out.go:177] * Verifying registry addon...
	I0923 12:29:16.372506  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.372533  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.373064  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.373074  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.373084  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.372536  670144 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-052630 service yakd-dashboard -n yakd-dashboard
	
	I0923 12:29:16.373416  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.373455  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.373463  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.373482  670144 addons.go:475] Verifying addon metrics-server=true in "addons-052630"
	I0923 12:29:16.373548  670144 out.go:177] * Verifying ingress addon...
	I0923 12:29:16.376859  670144 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 12:29:16.377235  670144 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 12:29:16.403137  670144 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 12:29:16.403166  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:16.404545  670144 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 12:29:16.404577  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
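
Both waits above poll pods by label selector until they leave Pending. The same selectors can be inspected directly while the addons settle (a sketch, assuming the addons-052630 context):

  # registry pods tracked by the first wait
  kubectl --context addons-052630 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
  # ingress-nginx pods tracked by the second wait
  kubectl --context addons-052630 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
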
	I0923 12:29:16.413711  670144 pod_ready.go:93] pod "kube-scheduler-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:16.413735  670144 pod_ready.go:82] duration metric: took 95.170893ms for pod "kube-scheduler-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.413745  670144 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.687574  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.866859653s)
	W0923 12:29:16.687654  670144 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 12:29:16.687692  670144 retry.go:31] will retry after 205.184874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 12:29:16.893570  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
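
The failure being retried here is a create-then-use race: the VolumeSnapshotClass is applied in the same batch as the CRDs that define it, so the first apply fails with "no matches for kind" and the harness re-runs the batch with --force. When driving these manifests by hand, the race can be avoided by waiting for the snapshot CRDs to be Established before applying the custom resource (a sketch; the manifest paths live on the node, so this would run there with the same kubectl binary and kubeconfig the harness invokes):

  # wait for the snapshot CRDs to be served before creating resources that use them
  kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
  # the VolumeSnapshotClass can now be applied without the mapping error
  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
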
	I0923 12:29:17.115140  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:17.115729  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:17.396617  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:17.396842  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:17.889967  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:17.890486  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:17.896395  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.174848485s)
	I0923 12:29:17.896449  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:17.896460  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:17.896462  670144 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.083466495s)
	I0923 12:29:17.896747  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:17.896804  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:17.896821  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:17.896830  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:17.897120  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:17.897136  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:17.897147  670144 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-052630"
	I0923 12:29:17.898347  670144 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 12:29:17.898446  670144 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 12:29:17.899858  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:29:17.900628  670144 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 12:29:17.901271  670144 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 12:29:17.901295  670144 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 12:29:17.940858  670144 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 12:29:17.940896  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:17.996704  670144 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 12:29:17.996735  670144 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 12:29:18.047586  670144 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 12:29:18.047614  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 12:29:18.096484  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 12:29:18.185732  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.292020776s)
	I0923 12:29:18.185806  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:18.185838  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:18.186138  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:18.186158  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:18.186169  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:18.186177  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:18.186426  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:18.186447  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:18.387863  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:18.388256  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:18.406385  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:18.421720  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:18.882500  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:18.882785  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:18.905191  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:19.387726  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:19.388481  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:19.411200  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:19.581790  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.485262596s)
	I0923 12:29:19.581873  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:19.581891  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:19.582219  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:19.582276  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:19.582301  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:19.582317  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:19.582328  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:19.582590  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:19.582647  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:19.582672  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:19.584672  670144 addons.go:475] Verifying addon gcp-auth=true in "addons-052630"
	I0923 12:29:19.586440  670144 out.go:177] * Verifying gcp-auth addon...
	I0923 12:29:19.589206  670144 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 12:29:19.620640  670144 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 12:29:19.620668  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
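(Note on the polling above: the kapi.go:96 lines are minikube repeatedly listing pods by label selector and reporting their phase until they leave Pending. Purely as an illustration, and not minikube's actual kapi.go code, a minimal client-go wait of that shape could look like the sketch below; the kubeconfig path, namespace, selector, and timeout are assumptions for the example.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector in ns is Running, or
// the timeout expires. It mirrors the "waiting for pod ..., current state:
// Pending" loop seen in the log, but is only an illustrative sketch.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
			}
		}
		if allRunning {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	// Assumed kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
		panic(err)
	}
}
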
	I0923 12:29:19.886738  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:19.890925  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:19.912686  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:20.096746  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:20.392258  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:20.393710  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:20.407449  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:20.593567  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:20.881568  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:20.881815  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:20.905516  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:20.920340  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:21.093740  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:21.384843  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:21.384987  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:21.405282  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:21.592541  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:21.884592  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:21.885028  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:21.908345  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:22.093490  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:22.386941  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:22.387161  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:22.404796  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:22.592403  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:22.881616  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:22.881661  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:22.905343  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:23.093177  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:23.384666  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:23.386163  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:23.426576  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:23.487848  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:23.592494  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:23.882714  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:23.883358  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:23.906870  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:24.092492  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:24.382319  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:24.382983  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:24.407140  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:24.593539  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:24.882594  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:24.883125  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:24.905274  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:25.092842  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:25.382809  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:25.382812  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:25.406742  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:25.593227  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:25.884510  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:25.888982  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:25.905898  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:25.927041  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:26.093083  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:26.381626  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:26.382291  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:26.405944  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:26.592774  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:26.882136  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:26.882387  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:26.904852  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:27.093581  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:27.382186  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:27.382448  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:27.405778  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:27.593357  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:27.884042  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:27.884439  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:27.985517  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:28.092766  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:28.381805  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:28.381982  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:28.405524  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:28.424581  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:28.592693  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:28.882335  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:28.882461  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:28.905150  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:29.093790  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:29.381852  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:29.381930  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:29.406197  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:29.593870  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:29.882541  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:29.882798  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:29.905474  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:30.093606  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:30.382135  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:30.382392  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:30.404887  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:30.592667  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:30.881745  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:30.881985  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:30.907119  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:30.923733  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:31.093218  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:31.381583  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:31.381644  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:31.405219  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:31.593141  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:31.881719  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:31.882449  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:31.905985  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:32.093520  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:32.381819  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:32.382499  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:32.406447  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:32.592822  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:32.883086  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:32.883410  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:32.904975  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:33.093110  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:33.381891  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:33.383762  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:33.407942  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:33.422107  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:33.593115  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:33.881264  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:33.881728  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:33.906608  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:34.093572  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:34.381552  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:34.382128  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:34.405613  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:34.592996  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:34.882206  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:34.882652  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:34.907227  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:35.092746  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:35.381896  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:35.382256  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:35.405744  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:35.593906  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:35.882021  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:35.882250  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:35.905757  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:35.919545  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:36.093133  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:36.381087  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:36.381911  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:36.405918  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:36.593023  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:36.880871  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:36.881484  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:36.905513  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:37.093228  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:37.381359  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:37.382168  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:37.404758  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:37.592991  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:37.883706  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:37.884057  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:37.905951  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:37.921061  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:38.095579  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:38.381352  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:38.382050  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:38.406732  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:38.592418  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:38.882769  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:38.884781  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:38.909673  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:39.092517  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:39.384210  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:39.385066  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:39.405577  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:39.592411  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:39.882233  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:39.882964  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:39.905696  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:39.921969  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:40.092984  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:40.382732  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:40.383202  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:40.405785  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:40.593074  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:40.882030  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:40.882422  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:40.904994  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:41.093877  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:41.383225  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:41.383328  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:41.405996  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:41.593221  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:41.881622  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:41.881736  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:41.905316  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:42.093230  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:42.382510  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:42.382663  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:42.405377  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:42.419518  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:42.592420  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:42.880988  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:42.881203  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:42.906415  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:43.092742  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:43.382514  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:43.383733  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:43.719884  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:43.720755  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:43.888232  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:43.889178  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:43.904914  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:44.094101  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:44.383060  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:44.383829  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:44.405971  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:44.592595  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:44.887366  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:44.887955  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:44.906306  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:44.922735  670144 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:44.922765  670144 pod_ready.go:82] duration metric: took 28.50901084s for pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:44.922773  670144 pod_ready.go:39] duration metric: took 34.241469342s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:29:44.922792  670144 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:29:44.922851  670144 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:29:44.942826  670144 api_server.go:72] duration metric: took 36.843890873s to wait for apiserver process to appear ...
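(The api_server.go lines above first confirm that a kube-apiserver process exists on the node before probing its health endpoint. The sketch below shells out to the same pgrep invocation shown in the log; in the real test it runs inside the minikube VM over SSH as root, so locally this is illustrative only.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same process check the log records via ssh_runner.go:195, run locally
	// for illustration; it only succeeds on a host that is actually running
	// a minikube kube-apiserver.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}
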
	I0923 12:29:44.942854  670144 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:29:44.942876  670144 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0923 12:29:44.947699  670144 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0923 12:29:44.948883  670144 api_server.go:141] control plane version: v1.31.1
	I0923 12:29:44.948908  670144 api_server.go:131] duration metric: took 6.047956ms to wait for apiserver health ...
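(The healthz probe above is a plain HTTPS GET that expects a 200 response with an "ok" body. A stand-alone sketch of that request is shown below; the address is the one from this run, and InsecureSkipVerify stands in for minikube's client certificates, so this is not a faithful reimplementation of api_server.go.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver /healthz endpoint the way the log does.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skips certificate verification instead of loading
			// the cluster CA and client certs as minikube does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.225:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
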
	I0923 12:29:44.948917  670144 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:29:44.958208  670144 system_pods.go:59] 17 kube-system pods found
	I0923 12:29:44.958245  670144 system_pods.go:61] "coredns-7c65d6cfc9-cvw7x" [3de8bd3c-0baf-459b-94f8-f5d52ef1286d] Running
	I0923 12:29:44.958253  670144 system_pods.go:61] "csi-hostpath-attacher-0" [4c3e1f51-c4eb-4fa0-ab09-335efd2aa843] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 12:29:44.958259  670144 system_pods.go:61] "csi-hostpath-resizer-0" [e4676deb-26a8-4a3c-87ac-a226db6563ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 12:29:44.958271  670144 system_pods.go:61] "csi-hostpathplugin-jd2lw" [feb3c94a-858a-4f61-a148-8b64dcfd0934] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 12:29:44.958276  670144 system_pods.go:61] "etcd-addons-052630" [ecb6248b-7e04-4747-946a-eb8fc976147e] Running
	I0923 12:29:44.958280  670144 system_pods.go:61] "kube-apiserver-addons-052630" [578f26c5-733e-4d3b-85da-ecade8aa52dd] Running
	I0923 12:29:44.958284  670144 system_pods.go:61] "kube-controller-manager-addons-052630" [55212af5-b2df-4621-a846-c8912549238d] Running
	I0923 12:29:44.958288  670144 system_pods.go:61] "kube-ingress-dns-minikube" [2187b5c3-511a-4aab-a372-f66d680bbf18] Running
	I0923 12:29:44.958291  670144 system_pods.go:61] "kube-proxy-vn9km" [0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00] Running
	I0923 12:29:44.958295  670144 system_pods.go:61] "kube-scheduler-addons-052630" [a180218d-c5e9-4947-b527-7f9570b9c578] Running
	I0923 12:29:44.958300  670144 system_pods.go:61] "metrics-server-84c5f94fbc-2rhln" [e7c5ceb3-389e-43ff-b807-718f23f12b0f] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 12:29:44.958304  670144 system_pods.go:61] "nvidia-device-plugin-daemonset-fhnrr" [8455a016-6ce8-40d4-bd64-ec3d2e30f774] Running
	I0923 12:29:44.958310  670144 system_pods.go:61] "registry-66c9cd494c-srklj" [ca56f86a-1049-47d9-b11b-9f492f1f0e5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 12:29:44.958314  670144 system_pods.go:61] "registry-proxy-xmmdr" [cf74bb33-75e5-4844-a3a8-fc698241ea5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 12:29:44.958320  670144 system_pods.go:61] "snapshot-controller-56fcc65765-76p2p" [20745ac3-21a3-45a6-8861-c0ba3567f38a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.958325  670144 system_pods.go:61] "snapshot-controller-56fcc65765-pzghc" [e4692d57-c84d-4bf1-bace-9d6a5a95d95e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.958331  670144 system_pods.go:61] "storage-provisioner" [3bc488f6-aa39-42bc-a0f5-173b2d7e07cf] Running
	I0923 12:29:44.958338  670144 system_pods.go:74] duration metric: took 9.414655ms to wait for pod list to return data ...
	I0923 12:29:44.958347  670144 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:29:44.961083  670144 default_sa.go:45] found service account: "default"
	I0923 12:29:44.961109  670144 default_sa.go:55] duration metric: took 2.755138ms for default service account to be created ...
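(The default_sa.go lines record an existence check for the namespace's default ServiceAccount. An equivalent client-go lookup, illustrative only and with an assumed kubeconfig path, might be:)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The check the log reports as `found service account: "default"`.
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		fmt.Println("default service account not ready yet:", err)
		return
	}
	fmt.Printf("found service account: %q\n", sa.Name)
}
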
	I0923 12:29:44.961119  670144 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:29:44.967937  670144 system_pods.go:86] 17 kube-system pods found
	I0923 12:29:44.967979  670144 system_pods.go:89] "coredns-7c65d6cfc9-cvw7x" [3de8bd3c-0baf-459b-94f8-f5d52ef1286d] Running
	I0923 12:29:44.967993  670144 system_pods.go:89] "csi-hostpath-attacher-0" [4c3e1f51-c4eb-4fa0-ab09-335efd2aa843] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 12:29:44.968001  670144 system_pods.go:89] "csi-hostpath-resizer-0" [e4676deb-26a8-4a3c-87ac-a226db6563ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 12:29:44.968012  670144 system_pods.go:89] "csi-hostpathplugin-jd2lw" [feb3c94a-858a-4f61-a148-8b64dcfd0934] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 12:29:44.968018  670144 system_pods.go:89] "etcd-addons-052630" [ecb6248b-7e04-4747-946a-eb8fc976147e] Running
	I0923 12:29:44.968024  670144 system_pods.go:89] "kube-apiserver-addons-052630" [578f26c5-733e-4d3b-85da-ecade8aa52dd] Running
	I0923 12:29:44.968029  670144 system_pods.go:89] "kube-controller-manager-addons-052630" [55212af5-b2df-4621-a846-c8912549238d] Running
	I0923 12:29:44.968037  670144 system_pods.go:89] "kube-ingress-dns-minikube" [2187b5c3-511a-4aab-a372-f66d680bbf18] Running
	I0923 12:29:44.968051  670144 system_pods.go:89] "kube-proxy-vn9km" [0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00] Running
	I0923 12:29:44.968057  670144 system_pods.go:89] "kube-scheduler-addons-052630" [a180218d-c5e9-4947-b527-7f9570b9c578] Running
	I0923 12:29:44.968066  670144 system_pods.go:89] "metrics-server-84c5f94fbc-2rhln" [e7c5ceb3-389e-43ff-b807-718f23f12b0f] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 12:29:44.968073  670144 system_pods.go:89] "nvidia-device-plugin-daemonset-fhnrr" [8455a016-6ce8-40d4-bd64-ec3d2e30f774] Running
	I0923 12:29:44.968088  670144 system_pods.go:89] "registry-66c9cd494c-srklj" [ca56f86a-1049-47d9-b11b-9f492f1f0e5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 12:29:44.968100  670144 system_pods.go:89] "registry-proxy-xmmdr" [cf74bb33-75e5-4844-a3a8-fc698241ea5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 12:29:44.968112  670144 system_pods.go:89] "snapshot-controller-56fcc65765-76p2p" [20745ac3-21a3-45a6-8861-c0ba3567f38a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.968131  670144 system_pods.go:89] "snapshot-controller-56fcc65765-pzghc" [e4692d57-c84d-4bf1-bace-9d6a5a95d95e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.968136  670144 system_pods.go:89] "storage-provisioner" [3bc488f6-aa39-42bc-a0f5-173b2d7e07cf] Running
	I0923 12:29:44.968149  670144 system_pods.go:126] duration metric: took 7.021444ms to wait for k8s-apps to be running ...
	I0923 12:29:44.968165  670144 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:29:44.968233  670144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:29:44.984699  670144 system_svc.go:56] duration metric: took 16.527101ms WaitForService to wait for kubelet
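(The WaitForService step above asks systemd on the node whether kubelet is active, using the exact command captured in the log. The sketch below reproduces that call locally; it is only meaningful on a host that actually has a kubelet unit, so treat it as an illustration.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log's ssh_runner step; exit status 0 means the
	// unit is active.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
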
	I0923 12:29:44.984736  670144 kubeadm.go:582] duration metric: took 36.885810437s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:29:44.984757  670144 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:29:44.987925  670144 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:29:44.987958  670144 node_conditions.go:123] node cpu capacity is 2
	I0923 12:29:44.987971  670144 node_conditions.go:105] duration metric: took 3.209178ms to run NodePressure ...
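(The node_conditions lines read ephemeral-storage and CPU capacity straight off the Node object. A minimal client-go sketch of that read, with an assumed kubeconfig path, is:)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The two capacity figures the log prints, read from Node status.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
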
	I0923 12:29:44.987984  670144 start.go:241] waiting for startup goroutines ...
	I0923 12:29:45.092993  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:45.381916  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:45.382878  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:45.405371  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:45.592889  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:45.882961  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:45.882986  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:45.905772  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:46.094099  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:46.381480  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:46.381480  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:46.405345  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:46.593680  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:46.881522  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:46.881585  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:46.907463  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:47.092649  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:47.381289  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:47.382803  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:47.404633  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:47.593242  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:47.881017  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:47.881741  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:47.905476  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:48.094283  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:48.381287  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:48.381678  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:48.404848  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:48.593290  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:49.182575  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:49.182862  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:49.183278  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:49.183600  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:49.387493  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:49.387949  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:49.409172  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:49.593041  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:49.881864  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:49.882012  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:49.905486  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:50.093223  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:50.381524  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:50.381911  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:50.405382  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:50.593121  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:50.882078  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:50.882130  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:50.904664  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:51.094395  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:51.381785  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:51.382965  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:51.404814  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:51.593466  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:51.881718  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:51.882182  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:51.906271  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:52.093535  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:52.381560  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:52.382447  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:52.483055  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:52.592715  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:52.882614  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:52.882831  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:52.905337  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:53.099377  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:53.382358  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:53.382434  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:53.405014  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:53.593255  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:53.881701  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:53.882109  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:53.905214  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:54.093317  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:54.381400  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:54.381756  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:54.405603  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:54.593298  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:54.881505  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:54.882280  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:54.905352  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:55.096080  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:55.381500  670144 kapi.go:107] duration metric: took 39.004256174s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 12:29:55.382262  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:55.407177  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:55.593060  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:55.881873  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:55.906292  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:56.095168  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:56.467534  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:56.467800  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:56.593413  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:56.881611  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:56.905852  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:57.093199  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:57.380555  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:57.407044  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:57.821632  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:57.881537  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:57.906086  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:58.093251  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:58.381225  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:58.405370  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:58.592999  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:58.882363  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:58.905848  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:59.092799  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:59.381850  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:59.405243  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:59.592647  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:59.883180  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:59.905462  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:00.093783  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:00.381525  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:00.405496  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:00.593067  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:00.882096  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:00.905415  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:01.093248  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:01.381090  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:01.404657  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:01.592915  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:01.881472  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:01.904650  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:02.094989  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:02.381519  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:02.482813  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:02.592969  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:02.881994  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:02.905592  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:03.092833  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:03.382442  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:03.737000  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:03.737731  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:03.881239  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:03.908549  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:04.092952  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:04.382596  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:04.406348  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:04.592523  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:04.882260  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:04.906335  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:05.093281  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:05.381532  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:05.404962  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:05.593867  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:05.881533  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:05.905611  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:06.092910  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:06.382350  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:06.405359  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:06.592970  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:06.881573  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:06.905700  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:07.093261  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:07.383765  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:07.406221  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:07.593359  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:07.881515  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:07.905283  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:08.094381  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:08.436545  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:08.437214  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:08.595352  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:08.881471  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:08.904728  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:09.094082  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:09.382329  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:09.418347  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:09.592417  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:09.882579  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:09.905086  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:10.093585  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:10.381916  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:10.408107  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:10.593205  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:10.881583  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:10.906213  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:11.092377  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:11.381528  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:11.405175  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:11.593188  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:11.881123  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:11.906575  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:12.093361  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:12.381510  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:12.418229  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:12.594390  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:12.883421  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:12.905655  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:13.093231  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:13.380738  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:13.409871  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:13.592706  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:13.881963  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:13.906221  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:14.092914  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:14.382057  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:14.405898  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:14.593405  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:14.883241  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:14.905532  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:15.092900  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:15.381659  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:15.404674  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:15.595837  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:15.884204  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:15.906723  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:16.096714  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:16.398360  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:16.492006  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:16.593666  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:16.886491  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:16.907334  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:17.105994  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:17.383325  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:17.406532  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:17.592593  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:17.881884  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:17.906107  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:18.098950  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:18.382178  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:18.406919  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:18.593795  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:18.881986  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:18.907032  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:19.093203  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:19.385652  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:19.486193  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:19.593670  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:20.158045  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:20.160442  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:20.160600  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:20.381193  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:20.406353  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:20.592767  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:20.881653  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:20.906233  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:21.092756  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:21.381504  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:21.404711  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:21.593682  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:21.882663  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:21.905651  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:22.094019  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:22.381116  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:22.482594  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:22.593429  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:22.882120  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:22.907262  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:23.093012  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:23.381337  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:23.416798  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:23.605942  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:23.883914  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:23.905484  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:24.092422  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:24.382490  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:24.404543  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:24.593615  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:24.882704  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:24.905157  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:25.092234  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:25.381913  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:25.406353  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:25.593550  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:25.881420  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:25.905759  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:26.092760  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:26.382791  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:26.404663  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:26.593511  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:26.881695  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:26.906109  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:27.092908  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:27.381352  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:27.405542  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:27.593292  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:27.881677  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:27.905877  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:28.093483  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:28.381903  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:28.405916  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:28.596909  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:28.883234  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:28.907825  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:29.093630  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:29.384206  670144 kapi.go:107] duration metric: took 1m13.007346283s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 12:30:29.408031  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:29.593154  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:29.905366  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:30.096542  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:30.407476  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:30.593391  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:30.905711  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:31.093234  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:31.406100  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:31.593583  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:31.905683  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:32.093451  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:32.405762  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:32.593457  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:32.906615  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:33.092949  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:33.405990  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:33.593662  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:33.908125  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:34.095552  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:34.410315  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:34.593641  670144 kapi.go:107] duration metric: took 1m15.004433334s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 12:30:34.596145  670144 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-052630 cluster.
	I0923 12:30:34.597867  670144 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 12:30:34.599357  670144 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 12:30:34.905455  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:35.406462  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:35.906240  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:36.408440  670144 kapi.go:107] duration metric: took 1m18.507800959s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 12:30:36.410763  670144 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, ingress-dns, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0923 12:30:36.412731  670144 addons.go:510] duration metric: took 1m28.313766491s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass ingress-dns inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0923 12:30:36.412794  670144 start.go:246] waiting for cluster config update ...
	I0923 12:30:36.412829  670144 start.go:255] writing updated cluster config ...
	I0923 12:30:36.413342  670144 ssh_runner.go:195] Run: rm -f paused
	I0923 12:30:36.467246  670144 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 12:30:36.469473  670144 out.go:177] * Done! kubectl is now configured to use "addons-052630" cluster and "default" namespace by default
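For reference, the gcp-auth guidance printed above can be exercised with a plain kubectl command. The following is a minimal sketch and not part of this run: the pod name is arbitrary, the image is the busybox image already used by this suite, and it assumes the webhook skips pods whose gcp-auth-skip-secret label is set to true (only the label key itself comes from the log output above):

	# hypothetical example: run a pod that opts out of GCP credential mounting
	kubectl --context addons-052630 run gcp-auth-skip-demo \
	  --image=gcr.io/k8s-minikube/busybox \
	  --labels=gcp-auth-skip-secret=true \
	  --restart=Never -- sleep 3600

As the output notes, pods that already exist are not retrofitted; they would need to be recreated, or the addon re-enabled with --refresh.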
	
	
	==> CRI-O <==
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.664540238Z" level=debug msg="running conmon: /usr/libexec/crio/conmon" args="[-b /var/run/containers/storage/overlay-containers/1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d/userdata -c 1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d --exit-dir /var/run/crio/exits -l /var/log/pods/local-path-storage_helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86_b0f85541-af1e-4f48-aef6-efce33d0d46e/helper-pod/0.log --log-level debug -n k8s_helper-pod_helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86_local-path-storage_b0f85541-af1e-4f48-aef6-efce33d0d46e_0 -P /var/run/containers/storage/overlay-containers/1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d/userdata/conmon-pidfile -p /var/run/containers/storage/overlay-containers/1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d/userdata/pidfile --persist-dir /var/lib/containers/storage/overlay-containers/1d2dd88253122
700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d/userdata -r /usr/bin/runc --runtime-arg --root=/run/runc --socket-dir-path /var/run/crio --syslog -u 1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d]" file="oci/runtime_oci.go:168" id=0dd2289b-8fe6-4d2b-9b4b-66f1786183b1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 23 12:39:53 addons-052630 conmon[9569]: conmon 1d2dd88253122700fc2f <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
	Sep 23 12:39:53 addons-052630 conmon[9569]: conmon 1d2dd88253122700fc2f <ndebug>: terminal_ctrl_fd: 12
	Sep 23 12:39:53 addons-052630 conmon[9569]: conmon 1d2dd88253122700fc2f <ndebug>: winsz read side: 16, winsz write side: 16
	Sep 23 12:39:53 addons-052630 conmon[9569]: conmon 1d2dd88253122700fc2f <ndebug>: container PID: 9584
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.710970349Z" level=debug msg="Received container pid: 9584" file="oci/runtime_oci.go:284" id=0dd2289b-8fe6-4d2b-9b4b-66f1786183b1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.725493035Z" level=info msg="Created container 1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d: local-path-storage/helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86/helper-pod" file="server/container_create.go:491" id=0dd2289b-8fe6-4d2b-9b4b-66f1786183b1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.725604030Z" level=debug msg="Response: &CreateContainerResponse{ContainerId:1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d,}" file="otel-collector/interceptors.go:74" id=0dd2289b-8fe6-4d2b-9b4b-66f1786183b1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.727176758Z" level=debug msg="Request: &StartContainerRequest{ContainerId:1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d,}" file="otel-collector/interceptors.go:62" id=0fc08f70-9880-4c74-976c-686e3186d347 name=/runtime.v1.RuntimeService/StartContainer
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.727325210Z" level=info msg="Starting container: 1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d" file="server/container_start.go:21" id=0fc08f70-9880-4c74-976c-686e3186d347 name=/runtime.v1.RuntimeService/StartContainer
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.728738124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4b2c097-b5f0-4223-9dc8-94fdae3a91e4 name=/runtime.v1.RuntimeService/Version
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.728795726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4b2c097-b5f0-4223-9dc8-94fdae3a91e4 name=/runtime.v1.RuntimeService/Version
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.731642687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e659290c-dee2-45d9-b3e9-8f68a9972b9a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.732715742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095193732685277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533239,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e659290c-dee2-45d9-b3e9-8f68a9972b9a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.733339320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22968e83-8560-4d45-a1e6-bccbe84010de name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.733392632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22968e83-8560-4d45-a1e6-bccbe84010de name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.733752413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d,PodSandboxId:8e47dd796ef4adf2000669db279a142637692e99446ab8e920d5080ff673b412,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_CREATED,CreatedAt:1727095193662857989,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b0f85541-af1e-4f48-aef6-efce33d0d46e,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd658c0598e6c49415ca300ec19c8efc652697d90ca659d5332bd0cc8f9da0ce,PodSandboxId:e9d41568c174048781bd2e547ce07b9b7f13bd648556c363403a06a7374416ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727095155775653048,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 487480e4-f024-4e3c-9c18-a9aabd6129fb,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.port
s: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c427e0695fa7dfe118179b0685857c7d96bbed4dca69a80b42715eb28daf3f3,PodSandboxId:e0f536b5e92b1765bbec31f330b1cbfc55061818c897748a2f248d41719fbcd7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727094633948657283,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-gzksd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 1b75c160-3198-402b-b135-861e77ac4482,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9183c228f7fc4dfd7472a4d3a1ca2afa4a41ea61b590a672d768ae47ee6707,PodSandboxId:c4069b80c396de9a62d3df227c6664528fa5f267e7da5ca5196435b19c8408c8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727094628449458669,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff
-s4nj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6d098fb8-3ff9-4429-a01a-80cb0eabbfce,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9e27a8d8fca2436dbc6c6a61141fca32d7ee57899f062b30fe7985c09af2497d,PodSandboxId:ed4a201ebc8ba0f30f371834b83f2c66afb4f5882ee634a4917eace4fc0240ca,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha25
6:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727094612554187733,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rt72w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be51de49-d024-4957-aa1c-cca98b0f88cd,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3acec0e873b6d270ce4f017668c10ddb9b853ceecdb55fa8e1c753abc4b762d,PodSandboxId:1c884f88ba6db8f1319071f0e2d608c1dfa5e0c14427ad8c874c2031e7a816cb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingr
ess-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727094612406967962,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d2m8p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea774ad2-860f-4e87-b48c-369cdc2dd298,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f1ae050ce475e5a505a980ea72122b45036c60002591f0381f922671fc411a,PodSandboxId:17d85166b8277c2a9faa6b4607652c23931a05692eb0e979f495fa4c4552c2f9,Metadata:&ContainerMetadata{Name:local-path-provisioner,A
ttempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727094606636049364,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-snqv8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 43c09017-cfad-4a08-b73c-bfba508afe73,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c562b0f0db97e156094a95051c2918c843a67120a3fad3a0ed62f76e4bdd99,PodSandboxId:812e276794e6a36a0f784df82410b65dd952a835961a065190a38
788b9decbf3,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1727094574078088354,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-2tf2f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7bfaa6dc-7b6d-496c-8757-ef15d11690c4,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34581f4844950b97c13f86aaaeaa7e10c5234c24af43cc30c33ba88e6862327a,PodSandboxId:562120c3b3394dd00bd40c1ca7c77e1e06183f98731c21f2784e0d126a02e2b8,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727094567687989900,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2187b5c3-511a-4aab-a372-f66d680bbf18,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kub
ernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5,PodSandboxId:dfa6385e052b942da39e7f1efb907744acba0e7c89c40514021b4c90d419d7bc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727094558710109886,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rhln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c5ceb3-389e-43ff-b807-718f23f12b0f,},Annotations:map[string]string{io
.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bbd55bde08fee5d7aeb446829fa511ea633c6594a8f94dbc19f40954380b59,PodSandboxId:7fc2b63648c6ce7f74862f514ca11336f589ba36807a84f82b5fe966e703bba1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727094554932322734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 3bc488f6-aa39-42bc-a0f5-173b2d7e07cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2700e6a975e0821a451d1a3a41fc665ed1652d4380515018e498434fe7a5a0ff,PodSandboxId:f5725c70d12571297f1fbc08fcf7c6634ea79b711270178cb2861d7a021f4a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727094551725672407,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvw7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de8bd3c-0baf-45
9b-94f8-f5d52ef1286d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00,PodSandboxId:d54027fa53db00e856f587b7398dfbee79868ce10d8c9bc030a174a635717867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17270945490162
00714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn9km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a,PodSandboxId:1a45969da935e2684242fa5b07b35eaa8001d3fe9d4867c4f31f2152672a0eea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727094538170986390,Labels:map[string]strin
g{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd793e50c81059d44a1e6fde8a448895,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85,PodSandboxId:8618182b0365790203283b2a6cd2de064a98724d33806cc9f4eedfc629ad8516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727094538165838825,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7efdfb9180b7292c18423e02021138d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0,PodSandboxId:2f48abf774e208d8f1e5e0d05f63bfa69400ab9e4bb0147be37e97f07eed1343,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727094538113594059,Labels:map[string]string{io.kubernetes.c
ontainer.name: etcd,io.kubernetes.pod.name: etcd-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1947c799ac122c11eb2c15f2bc9fdc08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81,PodSandboxId:a16e26d2dc6966551d559c1a5d3db6a99724044ad4418a767d04c065c600a61d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727094538130237781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c71f38e20d8cf8d860ac88cdd9241f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22968e83-8560-4d45-a1e6-bccbe84010de name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.743130908Z" level=info msg="Started container" PID=9584 containerID=1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d description=local-path-storage/helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86/helper-pod file="server/container_start.go:115" id=0fc08f70-9880-4c74-976c-686e3186d347 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e47dd796ef4adf2000669db279a142637692e99446ab8e920d5080ff673b412
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.747654145Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d.2O2VU2\"" file="server/server.go:805"
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.747771922Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d.2O2VU2\"" file="server/server.go:805"
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.747808706Z" level=debug msg="Container or sandbox exited: 1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d.2O2VU2" file="server/server.go:810"
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.747850654Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d\"" file="server/server.go:805"
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.747897402Z" level=debug msg="Container or sandbox exited: 1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d" file="server/server.go:810"
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.747939823Z" level=debug msg="container exited and found: 1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d" file="server/server.go:825"
	Sep 23 12:39:53 addons-052630 crio[664]: time="2024-09-23 12:39:53.748148381Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/1d2dd88253122700fc2f43dd47cd63ecfa3a81564def9ee90027d069da17039d.2O2VU2\"" file="server/server.go:805"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	1d2dd88253122       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                            Less than a second ago   Exited              helper-pod                0                   8e47dd796ef4a       helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86
	dd658c0598e6c       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              38 seconds ago           Running             nginx                     0                   e9d41568c1740       nginx
	4c427e0695fa7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago            Running             gcp-auth                  0                   e0f536b5e92b1       gcp-auth-89d5ffd79-gzksd
	df9183c228f7f       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago            Running             controller                0                   c4069b80c396d       ingress-nginx-controller-bc57996ff-s4nj8
	9e27a8d8fca24       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago            Exited              patch                     0                   ed4a201ebc8ba       ingress-nginx-admission-patch-rt72w
	b3acec0e873b6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago            Exited              create                    0                   1c884f88ba6db       ingress-nginx-admission-create-d2m8p
	50f1ae050ce47       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             9 minutes ago            Running             local-path-provisioner    0                   17d85166b8277       local-path-provisioner-86d989889c-snqv8
	a0c562b0f0db9       gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf               10 minutes ago           Running             cloud-spanner-emulator    0                   812e276794e6a       cloud-spanner-emulator-5b584cc74-2tf2f
	34581f4844950       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago           Running             minikube-ingress-dns      0                   562120c3b3394       kube-ingress-dns-minikube
	54c2b9200f7a3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago           Running             metrics-server            0                   dfa6385e052b9       metrics-server-84c5f94fbc-2rhln
	58bbd55bde08f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago           Running             storage-provisioner       0                   7fc2b63648c6c       storage-provisioner
	2700e6a975e08       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago           Running             coredns                   0                   f5725c70d1257       coredns-7c65d6cfc9-cvw7x
	4f2e68fe05415       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             10 minutes ago           Running             kube-proxy                0                   d54027fa53db0       kube-proxy-vn9km
	2d98809372a26       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             10 minutes ago           Running             kube-scheduler            0                   1a45969da935e       kube-scheduler-addons-052630
	137997c74fead       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             10 minutes ago           Running             kube-controller-manager   0                   8618182b03657       kube-controller-manager-addons-052630
	b706da2e61377       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             10 minutes ago           Running             kube-apiserver            0                   a16e26d2dc696       kube-apiserver-addons-052630
	84885d234fc5d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago           Running             etcd                      0                   2f48abf774e20       etcd-addons-052630
	
	
	==> coredns [2700e6a975e0821a451d1a3a41fc665ed1652d4380515018e498434fe7a5a0ff] <==
	[INFO] 10.244.0.7:59787 - 46467 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082041s
	[INFO] 10.244.0.21:50719 - 3578 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000678697s
	[INFO] 10.244.0.21:59846 - 36057 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000185909s
	[INFO] 10.244.0.21:51800 - 41027 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131443s
	[INFO] 10.244.0.21:60988 - 60393 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000092533s
	[INFO] 10.244.0.21:37198 - 50317 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088047s
	[INFO] 10.244.0.21:53871 - 9639 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076299s
	[INFO] 10.244.0.21:35205 - 14039 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004685857s
	[INFO] 10.244.0.21:34331 - 9494 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00457672s
	[INFO] 10.244.0.7:43442 - 53421 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000388692s
	[INFO] 10.244.0.7:43442 - 62888 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000079319s
	[INFO] 10.244.0.7:55893 - 18422 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000147084s
	[INFO] 10.244.0.7:55893 - 9973 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095576s
	[INFO] 10.244.0.7:47983 - 23764 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188893s
	[INFO] 10.244.0.7:47983 - 4566 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115139s
	[INFO] 10.244.0.7:50253 - 35636 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000151794s
	[INFO] 10.244.0.7:50253 - 39730 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122834s
	[INFO] 10.244.0.7:52374 - 7303 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000165376s
	[INFO] 10.244.0.7:52374 - 65467 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108039s
	[INFO] 10.244.0.7:38944 - 938 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084751s
	[INFO] 10.244.0.7:38944 - 32437 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074543s
	[INFO] 10.244.0.7:35936 - 54263 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055079s
	[INFO] 10.244.0.7:35936 - 63221 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100045s
	[INFO] 10.244.0.7:58342 - 30223 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010406s
	[INFO] 10.244.0.7:58342 - 58610 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006497s
	
	
	==> describe nodes <==
	Name:               addons-052630
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-052630
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-052630
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_29_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-052630
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-052630
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:39:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:39:36 +0000   Mon, 23 Sep 2024 12:28:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:39:36 +0000   Mon, 23 Sep 2024 12:28:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:39:36 +0000   Mon, 23 Sep 2024 12:28:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:39:36 +0000   Mon, 23 Sep 2024 12:29:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    addons-052630
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 46d8dccd290a43399ed351791d0287b7
	  System UUID:                46d8dccd-290a-4339-9ed3-51791d0287b7
	  Boot ID:                    aef77f72-28ae-4358-8b71-243c7f96a73e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-5b584cc74-2tf2f                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  gcp-auth                    gcp-auth-89d5ffd79-gzksd                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-s4nj8                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-cvw7x                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-052630                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-052630                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-052630                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vn9km                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-052630                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-2rhln                               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  local-path-storage          local-path-provisioner-86d989889c-snqv8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-052630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-052630 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-052630 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-052630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-052630 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-052630 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m                kubelet          Node addons-052630 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node addons-052630 event: Registered Node addons-052630 in Controller
	
	
	==> dmesg <==
	[  +0.087047] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.291784] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.140187] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.042931] kauditd_printk_skb: 120 callbacks suppressed
	[  +5.024950] kauditd_printk_skb: 96 callbacks suppressed
	[  +9.303213] kauditd_printk_skb: 112 callbacks suppressed
	[ +30.702636] kauditd_printk_skb: 2 callbacks suppressed
	[Sep23 12:30] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.339998] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.675662] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.264623] kauditd_printk_skb: 74 callbacks suppressed
	[  +7.313349] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.509035] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.134771] kauditd_printk_skb: 52 callbacks suppressed
	[Sep23 12:31] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 12:33] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 12:36] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 12:38] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.581855] kauditd_printk_skb: 6 callbacks suppressed
	[Sep23 12:39] kauditd_printk_skb: 26 callbacks suppressed
	[ +14.124700] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.773860] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.300408] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.891663] kauditd_printk_skb: 6 callbacks suppressed
	[ +11.246442] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0] <==
	{"level":"warn","ts":"2024-09-23T12:38:45.585401Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.347829ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.585825Z","caller":"traceutil/trace.go:171","msg":"trace[689683591] linearizableReadLoop","detail":"{readStateIndex:2107; appliedIndex:2106; }","duration":"240.076181ms","start":"2024-09-23T12:38:45.345716Z","end":"2024-09-23T12:38:45.585792Z","steps":["trace[689683591] 'read index received'  (duration: 239.056299ms)","trace[689683591] 'applied index is now lower than readState.Index'  (duration: 1.019485ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T12:38:45.585991Z","caller":"traceutil/trace.go:171","msg":"trace[1433974130] transaction","detail":"{read_only:false; response_revision:1969; number_of_response:1; }","duration":"412.405735ms","start":"2024-09-23T12:38:45.173572Z","end":"2024-09-23T12:38:45.585978Z","steps":["trace[1433974130] 'process raft request'  (duration: 411.242557ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.586153Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T12:38:45.173553Z","time spent":"412.503245ms","remote":"127.0.0.1:41198","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-052630\" mod_revision:1922 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-052630\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-052630\" > >"}
	{"level":"warn","ts":"2024-09-23T12:38:45.586522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.799311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586554Z","caller":"traceutil/trace.go:171","msg":"trace[1061360463] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1969; }","duration":"240.827325ms","start":"2024-09-23T12:38:45.345712Z","end":"2024-09-23T12:38:45.586540Z","steps":["trace[1061360463] 'agreement among raft nodes before linearized reading'  (duration: 240.547514ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.586713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.275793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586729Z","caller":"traceutil/trace.go:171","msg":"trace[1600622772] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1969; }","duration":"181.293593ms","start":"2024-09-23T12:38:45.405431Z","end":"2024-09-23T12:38:45.586724Z","steps":["trace[1600622772] 'agreement among raft nodes before linearized reading'  (duration: 181.261953ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.586889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.90923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586903Z","caller":"traceutil/trace.go:171","msg":"trace[43504617] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1969; }","duration":"108.925213ms","start":"2024-09-23T12:38:45.477974Z","end":"2024-09-23T12:38:45.586899Z","steps":["trace[43504617] 'agreement among raft nodes before linearized reading'  (duration: 108.900464ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.586971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.015116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586992Z","caller":"traceutil/trace.go:171","msg":"trace[1522914426] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1969; }","duration":"109.03651ms","start":"2024-09-23T12:38:45.477951Z","end":"2024-09-23T12:38:45.586988Z","steps":["trace[1522914426] 'agreement among raft nodes before linearized reading'  (duration: 109.008631ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.587155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.402947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-09-23T12:38:45.587172Z","caller":"traceutil/trace.go:171","msg":"trace[1003053304] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1969; }","duration":"122.420273ms","start":"2024-09-23T12:38:45.464747Z","end":"2024-09-23T12:38:45.587167Z","steps":["trace[1003053304] 'agreement among raft nodes before linearized reading'  (duration: 122.358904ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:45.588792Z","caller":"traceutil/trace.go:171","msg":"trace[1914231593] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1968; }","duration":"223.550909ms","start":"2024-09-23T12:38:45.361971Z","end":"2024-09-23T12:38:45.585522Z","steps":["trace[1914231593] 'range keys from in-memory index tree'  (duration: 223.329199ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:59.411855Z","caller":"traceutil/trace.go:171","msg":"trace[1835850910] transaction","detail":"{read_only:false; response_revision:2049; number_of_response:1; }","duration":"277.964156ms","start":"2024-09-23T12:38:59.133873Z","end":"2024-09-23T12:38:59.411837Z","steps":["trace[1835850910] 'process raft request'  (duration: 277.797273ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:59.412118Z","caller":"traceutil/trace.go:171","msg":"trace[494165466] linearizableReadLoop","detail":"{readStateIndex:2191; appliedIndex:2191; }","duration":"230.364595ms","start":"2024-09-23T12:38:59.181745Z","end":"2024-09-23T12:38:59.412110Z","steps":["trace[494165466] 'read index received'  (duration: 230.361284ms)","trace[494165466] 'applied index is now lower than readState.Index'  (duration: 2.661µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T12:38:59.412326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.027808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-23T12:38:59.412352Z","caller":"traceutil/trace.go:171","msg":"trace[1017305449] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:2049; }","duration":"166.068608ms","start":"2024-09-23T12:38:59.246275Z","end":"2024-09-23T12:38:59.412343Z","steps":["trace[1017305449] 'agreement among raft nodes before linearized reading'  (duration: 165.97691ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:59.412565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.833337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-23T12:38:59.412600Z","caller":"traceutil/trace.go:171","msg":"trace[1433149078] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2049; }","duration":"230.871055ms","start":"2024-09-23T12:38:59.181723Z","end":"2024-09-23T12:38:59.412594Z","steps":["trace[1433149078] 'agreement among raft nodes before linearized reading'  (duration: 230.777381ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:59.490314Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1537}
	{"level":"info","ts":"2024-09-23T12:38:59.546892Z","caller":"traceutil/trace.go:171","msg":"trace[1169948736] transaction","detail":"{read_only:false; response_revision:2050; number_of_response:1; }","duration":"130.033368ms","start":"2024-09-23T12:38:59.416838Z","end":"2024-09-23T12:38:59.546872Z","steps":["trace[1169948736] 'process raft request'  (duration: 74.021052ms)","trace[1169948736] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; req_size:1095; } (duration: 55.627555ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T12:38:59.562704Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1537,"took":"71.895193ms","hash":851007697,"current-db-size-bytes":6762496,"current-db-size":"6.8 MB","current-db-size-in-use-bytes":3760128,"current-db-size-in-use":"3.8 MB"}
	{"level":"info","ts":"2024-09-23T12:38:59.562759Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":851007697,"revision":1537,"compact-revision":-1}
	
	
	==> gcp-auth [4c427e0695fa7dfe118179b0685857c7d96bbed4dca69a80b42715eb28daf3f3] <==
	2024/09/23 12:30:34 GCP Auth Webhook started!
	2024/09/23 12:30:36 Ready to marshal response ...
	2024/09/23 12:30:36 Ready to write response ...
	2024/09/23 12:30:36 Ready to marshal response ...
	2024/09/23 12:30:36 Ready to write response ...
	2024/09/23 12:30:36 Ready to marshal response ...
	2024/09/23 12:30:36 Ready to write response ...
	2024/09/23 12:38:40 Ready to marshal response ...
	2024/09/23 12:38:40 Ready to write response ...
	2024/09/23 12:38:40 Ready to marshal response ...
	2024/09/23 12:38:40 Ready to write response ...
	2024/09/23 12:38:40 Ready to marshal response ...
	2024/09/23 12:38:40 Ready to write response ...
	2024/09/23 12:38:51 Ready to marshal response ...
	2024/09/23 12:38:51 Ready to write response ...
	2024/09/23 12:38:54 Ready to marshal response ...
	2024/09/23 12:38:54 Ready to write response ...
	2024/09/23 12:39:10 Ready to marshal response ...
	2024/09/23 12:39:10 Ready to write response ...
	2024/09/23 12:39:17 Ready to marshal response ...
	2024/09/23 12:39:17 Ready to write response ...
	2024/09/23 12:39:50 Ready to marshal response ...
	2024/09/23 12:39:50 Ready to write response ...
	2024/09/23 12:39:50 Ready to marshal response ...
	2024/09/23 12:39:50 Ready to write response ...
	
	
	==> kernel <==
	 12:39:54 up 11 min,  0 users,  load average: 1.00, 0.77, 0.57
	Linux addons-052630 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0923 12:30:23.468662       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.53.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.53.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.53.17:443: connect: connection refused" logger="UnhandledError"
	E0923 12:30:23.485377       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.53.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.53.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.53.17:443: connect: connection refused" logger="UnhandledError"
	E0923 12:30:23.508414       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.53.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.53.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.53.17:443: connect: connection refused" logger="UnhandledError"
	I0923 12:30:23.642288       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 12:38:40.310945       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.127.218"}
	I0923 12:39:05.053206       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 12:39:06.091724       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 12:39:07.866473       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 12:39:10.766646       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 12:39:10.966355       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.172.184"}
	I0923 12:39:32.696168       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.696258       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.715555       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.715618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.748060       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.748123       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.774215       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.775062       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.821384       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.821480       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 12:39:33.774424       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 12:39:33.821825       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 12:39:33.904647       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85] <==
	I0923 12:39:36.575647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-052630"
	W0923 12:39:37.326414       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:39:37.326526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:39:37.722939       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:39:37.722992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:39:37.824289       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0923 12:39:37.824389       1 shared_informer.go:320] Caches are synced for resource quota
	W0923 12:39:38.085217       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:39:38.085321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:39:38.256067       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0923 12:39:38.256213       1 shared_informer.go:320] Caches are synced for garbage collector
	W0923 12:39:41.893851       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:39:41.893984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:39:42.900978       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:39:42.901217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:39:43.271787       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:39:43.271831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:39:45.212378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="7.911µs"
	W0923 12:39:47.436361       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:39:47.436484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:39:51.091655       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:39:51.091703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:39:52.375453       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.616µs"
	W0923 12:39:52.606628       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:39:52.606685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 12:29:09.744228       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 12:29:09.770791       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.225"]
	E0923 12:29:09.770866       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 12:29:09.869461       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 12:29:09.869490       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 12:29:09.869514       1 server_linux.go:169] "Using iptables Proxier"
	I0923 12:29:09.873228       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 12:29:09.873652       1 server.go:483] "Version info" version="v1.31.1"
	I0923 12:29:09.873664       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:29:09.875209       1 config.go:199] "Starting service config controller"
	I0923 12:29:09.875235       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 12:29:09.875268       1 config.go:105] "Starting endpoint slice config controller"
	I0923 12:29:09.875271       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 12:29:09.875715       1 config.go:328] "Starting node config controller"
	I0923 12:29:09.875721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 12:29:09.975594       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 12:29:09.976446       1 shared_informer.go:320] Caches are synced for node config
	I0923 12:29:09.976502       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a] <==
	W0923 12:29:00.681864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:29:00.681896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:00.681942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 12:29:00.681966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:00.681871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:00.682069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:00.682524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 12:29:00.682555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.521067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 12:29:01.521115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.593793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:29:01.593842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.675102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:29:01.675475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.701107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 12:29:01.701156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.718193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:01.718242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.750179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:01.750230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.832371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:01.832582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.940561       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:29:01.940868       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 12:29:04.675339       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 12:39:50 addons-052630 kubelet[1207]: I0923 12:39:50.826232    1207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b0f85541-af1e-4f48-aef6-efce33d0d46e-script\") pod \"helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86\" (UID: \"b0f85541-af1e-4f48-aef6-efce33d0d46e\") " pod="local-path-storage/helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86"
	Sep 23 12:39:50 addons-052630 kubelet[1207]: I0923 12:39:50.826291    1207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw4tq\" (UniqueName: \"kubernetes.io/projected/b0f85541-af1e-4f48-aef6-efce33d0d46e-kube-api-access-fw4tq\") pod \"helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86\" (UID: \"b0f85541-af1e-4f48-aef6-efce33d0d46e\") " pod="local-path-storage/helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86"
	Sep 23 12:39:51 addons-052630 kubelet[1207]: I0923 12:39:51.940636    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4640d1f-da53-4886-bf0e-3ed0ff21afe3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "c4640d1f-da53-4886-bf0e-3ed0ff21afe3" (UID: "c4640d1f-da53-4886-bf0e-3ed0ff21afe3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 12:39:51 addons-052630 kubelet[1207]: I0923 12:39:51.940719    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c4640d1f-da53-4886-bf0e-3ed0ff21afe3-gcp-creds\") pod \"c4640d1f-da53-4886-bf0e-3ed0ff21afe3\" (UID: \"c4640d1f-da53-4886-bf0e-3ed0ff21afe3\") "
	Sep 23 12:39:51 addons-052630 kubelet[1207]: I0923 12:39:51.940757    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h7dm\" (UniqueName: \"kubernetes.io/projected/c4640d1f-da53-4886-bf0e-3ed0ff21afe3-kube-api-access-9h7dm\") pod \"c4640d1f-da53-4886-bf0e-3ed0ff21afe3\" (UID: \"c4640d1f-da53-4886-bf0e-3ed0ff21afe3\") "
	Sep 23 12:39:51 addons-052630 kubelet[1207]: I0923 12:39:51.947829    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4640d1f-da53-4886-bf0e-3ed0ff21afe3-kube-api-access-9h7dm" (OuterVolumeSpecName: "kube-api-access-9h7dm") pod "c4640d1f-da53-4886-bf0e-3ed0ff21afe3" (UID: "c4640d1f-da53-4886-bf0e-3ed0ff21afe3"). InnerVolumeSpecName "kube-api-access-9h7dm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:39:52 addons-052630 kubelet[1207]: I0923 12:39:52.041560    1207 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c4640d1f-da53-4886-bf0e-3ed0ff21afe3-gcp-creds\") on node \"addons-052630\" DevicePath \"\""
	Sep 23 12:39:52 addons-052630 kubelet[1207]: I0923 12:39:52.041587    1207 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9h7dm\" (UniqueName: \"kubernetes.io/projected/c4640d1f-da53-4886-bf0e-3ed0ff21afe3-kube-api-access-9h7dm\") on node \"addons-052630\" DevicePath \"\""
	Sep 23 12:39:52 addons-052630 kubelet[1207]: I0923 12:39:52.746452    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrbd8\" (UniqueName: \"kubernetes.io/projected/ca56f86a-1049-47d9-b11b-9f492f1f0e5a-kube-api-access-jrbd8\") pod \"ca56f86a-1049-47d9-b11b-9f492f1f0e5a\" (UID: \"ca56f86a-1049-47d9-b11b-9f492f1f0e5a\") "
	Sep 23 12:39:52 addons-052630 kubelet[1207]: I0923 12:39:52.755882    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca56f86a-1049-47d9-b11b-9f492f1f0e5a-kube-api-access-jrbd8" (OuterVolumeSpecName: "kube-api-access-jrbd8") pod "ca56f86a-1049-47d9-b11b-9f492f1f0e5a" (UID: "ca56f86a-1049-47d9-b11b-9f492f1f0e5a"). InnerVolumeSpecName "kube-api-access-jrbd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:39:52 addons-052630 kubelet[1207]: I0923 12:39:52.847345    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvd64\" (UniqueName: \"kubernetes.io/projected/cf74bb33-75e5-4844-a3a8-fc698241ea5c-kube-api-access-mvd64\") pod \"cf74bb33-75e5-4844-a3a8-fc698241ea5c\" (UID: \"cf74bb33-75e5-4844-a3a8-fc698241ea5c\") "
	Sep 23 12:39:52 addons-052630 kubelet[1207]: I0923 12:39:52.847426    1207 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jrbd8\" (UniqueName: \"kubernetes.io/projected/ca56f86a-1049-47d9-b11b-9f492f1f0e5a-kube-api-access-jrbd8\") on node \"addons-052630\" DevicePath \"\""
	Sep 23 12:39:52 addons-052630 kubelet[1207]: I0923 12:39:52.850863    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf74bb33-75e5-4844-a3a8-fc698241ea5c-kube-api-access-mvd64" (OuterVolumeSpecName: "kube-api-access-mvd64") pod "cf74bb33-75e5-4844-a3a8-fc698241ea5c" (UID: "cf74bb33-75e5-4844-a3a8-fc698241ea5c"). InnerVolumeSpecName "kube-api-access-mvd64". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:39:52 addons-052630 kubelet[1207]: I0923 12:39:52.948759    1207 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mvd64\" (UniqueName: \"kubernetes.io/projected/cf74bb33-75e5-4844-a3a8-fc698241ea5c-kube-api-access-mvd64\") on node \"addons-052630\" DevicePath \"\""
	Sep 23 12:39:53 addons-052630 kubelet[1207]: I0923 12:39:53.126690    1207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4640d1f-da53-4886-bf0e-3ed0ff21afe3" path="/var/lib/kubelet/pods/c4640d1f-da53-4886-bf0e-3ed0ff21afe3/volumes"
	Sep 23 12:39:53 addons-052630 kubelet[1207]: I0923 12:39:53.308229    1207 scope.go:117] "RemoveContainer" containerID="31ae92364a6f3d8cf2574e0a012807ae7cd90816d053680493f0a0acd913c282"
	Sep 23 12:39:53 addons-052630 kubelet[1207]: I0923 12:39:53.381062    1207 scope.go:117] "RemoveContainer" containerID="31ae92364a6f3d8cf2574e0a012807ae7cd90816d053680493f0a0acd913c282"
	Sep 23 12:39:53 addons-052630 kubelet[1207]: E0923 12:39:53.381873    1207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31ae92364a6f3d8cf2574e0a012807ae7cd90816d053680493f0a0acd913c282\": container with ID starting with 31ae92364a6f3d8cf2574e0a012807ae7cd90816d053680493f0a0acd913c282 not found: ID does not exist" containerID="31ae92364a6f3d8cf2574e0a012807ae7cd90816d053680493f0a0acd913c282"
	Sep 23 12:39:53 addons-052630 kubelet[1207]: I0923 12:39:53.381917    1207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31ae92364a6f3d8cf2574e0a012807ae7cd90816d053680493f0a0acd913c282"} err="failed to get container status \"31ae92364a6f3d8cf2574e0a012807ae7cd90816d053680493f0a0acd913c282\": rpc error: code = NotFound desc = could not find container \"31ae92364a6f3d8cf2574e0a012807ae7cd90816d053680493f0a0acd913c282\": container with ID starting with 31ae92364a6f3d8cf2574e0a012807ae7cd90816d053680493f0a0acd913c282 not found: ID does not exist"
	Sep 23 12:39:53 addons-052630 kubelet[1207]: I0923 12:39:53.381940    1207 scope.go:117] "RemoveContainer" containerID="411c792f926769e85b78ef6a67704f5d3ae973903dfe99e66a72eaba2cec862a"
	Sep 23 12:39:53 addons-052630 kubelet[1207]: I0923 12:39:53.402359    1207 scope.go:117] "RemoveContainer" containerID="411c792f926769e85b78ef6a67704f5d3ae973903dfe99e66a72eaba2cec862a"
	Sep 23 12:39:53 addons-052630 kubelet[1207]: E0923 12:39:53.402903    1207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"411c792f926769e85b78ef6a67704f5d3ae973903dfe99e66a72eaba2cec862a\": container with ID starting with 411c792f926769e85b78ef6a67704f5d3ae973903dfe99e66a72eaba2cec862a not found: ID does not exist" containerID="411c792f926769e85b78ef6a67704f5d3ae973903dfe99e66a72eaba2cec862a"
	Sep 23 12:39:53 addons-052630 kubelet[1207]: I0923 12:39:53.402935    1207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411c792f926769e85b78ef6a67704f5d3ae973903dfe99e66a72eaba2cec862a"} err="failed to get container status \"411c792f926769e85b78ef6a67704f5d3ae973903dfe99e66a72eaba2cec862a\": rpc error: code = NotFound desc = could not find container \"411c792f926769e85b78ef6a67704f5d3ae973903dfe99e66a72eaba2cec862a\": container with ID starting with 411c792f926769e85b78ef6a67704f5d3ae973903dfe99e66a72eaba2cec862a not found: ID does not exist"
	Sep 23 12:39:53 addons-052630 kubelet[1207]: E0923 12:39:53.485241    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095193484823628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522690,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 12:39:53 addons-052630 kubelet[1207]: E0923 12:39:53.485269    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095193484823628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522690,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [58bbd55bde08fee5d7aeb446829fa511ea633c6594a8f94dbc19f40954380b59] <==
	I0923 12:29:15.418528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 12:29:15.469448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 12:29:15.469505       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 12:29:15.499374       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 12:29:15.512080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-052630_666822b7-806c-46b8-b021-ef12b62fd031!
	I0923 12:29:15.512828       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16ab68d2-163f-4497-86c2-19800b48c856", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-052630_666822b7-806c-46b8-b021-ef12b62fd031 became leader
	I0923 12:29:15.856800       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-052630_666822b7-806c-46b8-b021-ef12b62fd031!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-052630 -n addons-052630
helpers_test.go:261: (dbg) Run:  kubectl --context addons-052630 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path ingress-nginx-admission-create-d2m8p ingress-nginx-admission-patch-rt72w helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-052630 describe pod busybox test-local-path ingress-nginx-admission-create-d2m8p ingress-nginx-admission-patch-rt72w helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-052630 describe pod busybox test-local-path ingress-nginx-admission-create-d2m8p ingress-nginx-admission-patch-rt72w helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86: exit status 1 (84.002216ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-052630/192.168.39.225
	Start Time:       Mon, 23 Sep 2024 12:30:36 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hx7h2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hx7h2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m18s                  default-scheduler  Successfully assigned default/busybox to addons-052630
	  Normal   Pulling    7m56s (x4 over 9m17s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m56s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m56s (x4 over 9m17s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m29s (x6 over 9m16s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m4s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5nvn (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-j5nvn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-d2m8p" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rt72w" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-052630 describe pod busybox test-local-path ingress-nginx-admission-create-d2m8p ingress-nginx-admission-patch-rt72w helper-pod-create-pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.48s)
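The describe output above shows the busybox pod stuck in ImagePullBackOff after an auth error while pulling gcr.io/k8s-minikube/busybox:1.28.4-glibc. A minimal manual check to separate an image-pull problem from a registry-addon problem might look like the following (illustrative only, not part of the test harness; it assumes the addons-052630 profile is still running and that crictl is available inside the VM, as it normally is with the crio runtime):

	out/minikube-linux-amd64 -p addons-052630 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	kubectl --context addons-052630 get pods -A --field-selector=status.phase!=Running

If the crictl pull fails with the same authentication error, the failure sits between the node and gcr.io rather than in the registry addon under test.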

                                                
                                    
TestAddons/parallel/Ingress (154.21s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-052630 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-052630 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-052630 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [487480e4-f024-4e3c-9c18-a9aabd6129fb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [487480e4-f024-4e3c-9c18-a9aabd6129fb] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00499673s
I0923 12:39:21.019612  669447 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-052630 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.587595656s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
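The ssh command above exits with status 28, which matches curl's timeout exit code, so the request to the ingress on 127.0.0.1 simply never got an answer. A minimal manual probe of the same path might look like the following (illustrative only, not part of the test harness; it assumes the addons-052630 profile is still running and that the nginx ingress and pod applied above are still present):

	kubectl --context addons-052630 -n ingress-nginx get pods -o wide
	kubectl --context addons-052630 get ingress -A
	out/minikube-linux-amd64 -p addons-052630 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

The -v and --max-time 10 flags are only there to surface the timeout quickly and verbosely instead of waiting out the harness's longer limit.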
addons_test.go:284: (dbg) Run:  kubectl --context addons-052630 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.225
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-052630 addons disable ingress-dns --alsologtostderr -v=1: (1.753678096s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-052630 addons disable ingress --alsologtostderr -v=1: (7.761229127s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-052630 -n addons-052630
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-052630 logs -n 25: (1.264581908s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| delete  | -p download-only-473947                                                                     | download-only-473947 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| delete  | -p download-only-832165                                                                     | download-only-832165 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| delete  | -p download-only-473947                                                                     | download-only-473947 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-529103 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC |                     |
	|         | binary-mirror-529103                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35373                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-529103                                                                     | binary-mirror-529103 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| addons  | disable dashboard -p                                                                        | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC |                     |
	|         | addons-052630                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC |                     |
	|         | addons-052630                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-052630 --wait=true                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:30 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:38 UTC | 23 Sep 24 12:38 UTC |
	|         | -p addons-052630                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:38 UTC | 23 Sep 24 12:38 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | addons-052630                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-052630 ssh curl -s                                                                   | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-052630 addons                                                                        | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-052630 addons                                                                        | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | -p addons-052630                                                                            |                      |         |         |                     |                     |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-052630 ip                                                                            | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:40 UTC |
	|         | addons-052630                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-052630 ssh cat                                                                       | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:40 UTC |
	|         | /opt/local-path-provisioner/pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-052630 ip                                                                            | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:41 UTC | 23 Sep 24 12:41 UTC |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:41 UTC | 23 Sep 24 12:41 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:41 UTC | 23 Sep 24 12:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:28:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:28:24.813371  670144 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:28:24.813646  670144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:28:24.813655  670144 out.go:358] Setting ErrFile to fd 2...
	I0923 12:28:24.813660  670144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:28:24.813860  670144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 12:28:24.814564  670144 out.go:352] Setting JSON to false
	I0923 12:28:24.815641  670144 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7848,"bootTime":1727086657,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:28:24.815741  670144 start.go:139] virtualization: kvm guest
	I0923 12:28:24.818077  670144 out.go:177] * [addons-052630] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:28:24.819427  670144 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:28:24.819496  670144 notify.go:220] Checking for updates...
	I0923 12:28:24.821743  670144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:28:24.823109  670144 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:28:24.824398  670144 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:28:24.825560  670144 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:28:24.826608  670144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:28:24.827862  670144 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:28:24.861163  670144 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 12:28:24.862619  670144 start.go:297] selected driver: kvm2
	I0923 12:28:24.862645  670144 start.go:901] validating driver "kvm2" against <nil>
	I0923 12:28:24.862661  670144 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:28:24.863497  670144 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:28:24.863608  670144 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 12:28:24.879912  670144 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 12:28:24.879978  670144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:28:24.880260  670144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:28:24.880303  670144 cni.go:84] Creating CNI manager for ""
	I0923 12:28:24.880362  670144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 12:28:24.880373  670144 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 12:28:24.880464  670144 start.go:340] cluster config:
	{Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:28:24.880601  670144 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:28:24.882416  670144 out.go:177] * Starting "addons-052630" primary control-plane node in "addons-052630" cluster
	I0923 12:28:24.883605  670144 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:28:24.883654  670144 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 12:28:24.883668  670144 cache.go:56] Caching tarball of preloaded images
	I0923 12:28:24.883756  670144 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:28:24.883772  670144 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:28:24.884127  670144 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/config.json ...
	I0923 12:28:24.884158  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/config.json: {Name:mk8f8b007c3bc269ac83b2216416a2c7aa34749b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:24.884352  670144 start.go:360] acquireMachinesLock for addons-052630: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:28:24.884434  670144 start.go:364] duration metric: took 46.812µs to acquireMachinesLock for "addons-052630"
	I0923 12:28:24.884466  670144 start.go:93] Provisioning new machine with config: &{Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:28:24.884576  670144 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 12:28:24.886275  670144 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 12:28:24.886477  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:28:24.886532  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:28:24.901608  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I0923 12:28:24.902121  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:28:24.902783  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:28:24.902809  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:28:24.903341  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:28:24.903572  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:24.903730  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:24.903901  670144 start.go:159] libmachine.API.Create for "addons-052630" (driver="kvm2")
	I0923 12:28:24.903933  670144 client.go:168] LocalClient.Create starting
	I0923 12:28:24.903984  670144 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:28:24.971472  670144 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:28:25.199996  670144 main.go:141] libmachine: Running pre-create checks...
	I0923 12:28:25.200025  670144 main.go:141] libmachine: (addons-052630) Calling .PreCreateCheck
	I0923 12:28:25.200603  670144 main.go:141] libmachine: (addons-052630) Calling .GetConfigRaw
	I0923 12:28:25.201064  670144 main.go:141] libmachine: Creating machine...
	I0923 12:28:25.201081  670144 main.go:141] libmachine: (addons-052630) Calling .Create
	I0923 12:28:25.201318  670144 main.go:141] libmachine: (addons-052630) Creating KVM machine...
	I0923 12:28:25.202978  670144 main.go:141] libmachine: (addons-052630) DBG | found existing default KVM network
	I0923 12:28:25.203985  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.203807  670166 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I0923 12:28:25.204034  670144 main.go:141] libmachine: (addons-052630) DBG | created network xml: 
	I0923 12:28:25.204055  670144 main.go:141] libmachine: (addons-052630) DBG | <network>
	I0923 12:28:25.204063  670144 main.go:141] libmachine: (addons-052630) DBG |   <name>mk-addons-052630</name>
	I0923 12:28:25.204070  670144 main.go:141] libmachine: (addons-052630) DBG |   <dns enable='no'/>
	I0923 12:28:25.204076  670144 main.go:141] libmachine: (addons-052630) DBG |   
	I0923 12:28:25.204082  670144 main.go:141] libmachine: (addons-052630) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 12:28:25.204088  670144 main.go:141] libmachine: (addons-052630) DBG |     <dhcp>
	I0923 12:28:25.204093  670144 main.go:141] libmachine: (addons-052630) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 12:28:25.204101  670144 main.go:141] libmachine: (addons-052630) DBG |     </dhcp>
	I0923 12:28:25.204105  670144 main.go:141] libmachine: (addons-052630) DBG |   </ip>
	I0923 12:28:25.204112  670144 main.go:141] libmachine: (addons-052630) DBG |   
	I0923 12:28:25.204119  670144 main.go:141] libmachine: (addons-052630) DBG | </network>
	I0923 12:28:25.204129  670144 main.go:141] libmachine: (addons-052630) DBG | 
	I0923 12:28:25.209600  670144 main.go:141] libmachine: (addons-052630) DBG | trying to create private KVM network mk-addons-052630 192.168.39.0/24...
	I0923 12:28:25.278429  670144 main.go:141] libmachine: (addons-052630) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630 ...
	I0923 12:28:25.278462  670144 main.go:141] libmachine: (addons-052630) DBG | private KVM network mk-addons-052630 192.168.39.0/24 created
	I0923 12:28:25.278471  670144 main.go:141] libmachine: (addons-052630) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:28:25.278507  670144 main.go:141] libmachine: (addons-052630) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:28:25.278523  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.278366  670166 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:28:25.561478  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.561306  670166 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa...
	I0923 12:28:25.781646  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.781463  670166 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/addons-052630.rawdisk...
	I0923 12:28:25.781686  670144 main.go:141] libmachine: (addons-052630) DBG | Writing magic tar header
	I0923 12:28:25.781699  670144 main.go:141] libmachine: (addons-052630) DBG | Writing SSH key tar header
	I0923 12:28:25.781710  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.781618  670166 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630 ...
	I0923 12:28:25.781843  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630
	I0923 12:28:25.781876  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630 (perms=drwx------)
	I0923 12:28:25.781893  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:28:25.781906  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:28:25.781926  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:28:25.781942  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:28:25.781979  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:28:25.781995  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:28:25.782008  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:28:25.782019  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:28:25.782030  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:28:25.782042  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home
	I0923 12:28:25.782054  670144 main.go:141] libmachine: (addons-052630) DBG | Skipping /home - not owner
	I0923 12:28:25.782073  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:28:25.782083  670144 main.go:141] libmachine: (addons-052630) Creating domain...
	I0923 12:28:25.783344  670144 main.go:141] libmachine: (addons-052630) define libvirt domain using xml: 
	I0923 12:28:25.783364  670144 main.go:141] libmachine: (addons-052630) <domain type='kvm'>
	I0923 12:28:25.783372  670144 main.go:141] libmachine: (addons-052630)   <name>addons-052630</name>
	I0923 12:28:25.783376  670144 main.go:141] libmachine: (addons-052630)   <memory unit='MiB'>4000</memory>
	I0923 12:28:25.783381  670144 main.go:141] libmachine: (addons-052630)   <vcpu>2</vcpu>
	I0923 12:28:25.783385  670144 main.go:141] libmachine: (addons-052630)   <features>
	I0923 12:28:25.783390  670144 main.go:141] libmachine: (addons-052630)     <acpi/>
	I0923 12:28:25.783396  670144 main.go:141] libmachine: (addons-052630)     <apic/>
	I0923 12:28:25.783403  670144 main.go:141] libmachine: (addons-052630)     <pae/>
	I0923 12:28:25.783409  670144 main.go:141] libmachine: (addons-052630)     
	I0923 12:28:25.783417  670144 main.go:141] libmachine: (addons-052630)   </features>
	I0923 12:28:25.783427  670144 main.go:141] libmachine: (addons-052630)   <cpu mode='host-passthrough'>
	I0923 12:28:25.783435  670144 main.go:141] libmachine: (addons-052630)   
	I0923 12:28:25.783446  670144 main.go:141] libmachine: (addons-052630)   </cpu>
	I0923 12:28:25.783453  670144 main.go:141] libmachine: (addons-052630)   <os>
	I0923 12:28:25.783463  670144 main.go:141] libmachine: (addons-052630)     <type>hvm</type>
	I0923 12:28:25.783477  670144 main.go:141] libmachine: (addons-052630)     <boot dev='cdrom'/>
	I0923 12:28:25.783486  670144 main.go:141] libmachine: (addons-052630)     <boot dev='hd'/>
	I0923 12:28:25.783493  670144 main.go:141] libmachine: (addons-052630)     <bootmenu enable='no'/>
	I0923 12:28:25.783502  670144 main.go:141] libmachine: (addons-052630)   </os>
	I0923 12:28:25.783511  670144 main.go:141] libmachine: (addons-052630)   <devices>
	I0923 12:28:25.783529  670144 main.go:141] libmachine: (addons-052630)     <disk type='file' device='cdrom'>
	I0923 12:28:25.783552  670144 main.go:141] libmachine: (addons-052630)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/boot2docker.iso'/>
	I0923 12:28:25.783577  670144 main.go:141] libmachine: (addons-052630)       <target dev='hdc' bus='scsi'/>
	I0923 12:28:25.783588  670144 main.go:141] libmachine: (addons-052630)       <readonly/>
	I0923 12:28:25.783595  670144 main.go:141] libmachine: (addons-052630)     </disk>
	I0923 12:28:25.783607  670144 main.go:141] libmachine: (addons-052630)     <disk type='file' device='disk'>
	I0923 12:28:25.783618  670144 main.go:141] libmachine: (addons-052630)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:28:25.783633  670144 main.go:141] libmachine: (addons-052630)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/addons-052630.rawdisk'/>
	I0923 12:28:25.783643  670144 main.go:141] libmachine: (addons-052630)       <target dev='hda' bus='virtio'/>
	I0923 12:28:25.783719  670144 main.go:141] libmachine: (addons-052630)     </disk>
	I0923 12:28:25.783743  670144 main.go:141] libmachine: (addons-052630)     <interface type='network'>
	I0923 12:28:25.783752  670144 main.go:141] libmachine: (addons-052630)       <source network='mk-addons-052630'/>
	I0923 12:28:25.783766  670144 main.go:141] libmachine: (addons-052630)       <model type='virtio'/>
	I0923 12:28:25.783776  670144 main.go:141] libmachine: (addons-052630)     </interface>
	I0923 12:28:25.783789  670144 main.go:141] libmachine: (addons-052630)     <interface type='network'>
	I0923 12:28:25.783807  670144 main.go:141] libmachine: (addons-052630)       <source network='default'/>
	I0923 12:28:25.783821  670144 main.go:141] libmachine: (addons-052630)       <model type='virtio'/>
	I0923 12:28:25.783832  670144 main.go:141] libmachine: (addons-052630)     </interface>
	I0923 12:28:25.783845  670144 main.go:141] libmachine: (addons-052630)     <serial type='pty'>
	I0923 12:28:25.783856  670144 main.go:141] libmachine: (addons-052630)       <target port='0'/>
	I0923 12:28:25.783866  670144 main.go:141] libmachine: (addons-052630)     </serial>
	I0923 12:28:25.783878  670144 main.go:141] libmachine: (addons-052630)     <console type='pty'>
	I0923 12:28:25.783909  670144 main.go:141] libmachine: (addons-052630)       <target type='serial' port='0'/>
	I0923 12:28:25.783928  670144 main.go:141] libmachine: (addons-052630)     </console>
	I0923 12:28:25.783942  670144 main.go:141] libmachine: (addons-052630)     <rng model='virtio'>
	I0923 12:28:25.783955  670144 main.go:141] libmachine: (addons-052630)       <backend model='random'>/dev/random</backend>
	I0923 12:28:25.783971  670144 main.go:141] libmachine: (addons-052630)     </rng>
	I0923 12:28:25.783993  670144 main.go:141] libmachine: (addons-052630)     
	I0923 12:28:25.784002  670144 main.go:141] libmachine: (addons-052630)     
	I0923 12:28:25.784006  670144 main.go:141] libmachine: (addons-052630)   </devices>
	I0923 12:28:25.784016  670144 main.go:141] libmachine: (addons-052630) </domain>
	I0923 12:28:25.784025  670144 main.go:141] libmachine: (addons-052630) 
	I0923 12:28:25.788537  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:fa:ec:fb in network default
	I0923 12:28:25.789254  670144 main.go:141] libmachine: (addons-052630) Ensuring networks are active...
	I0923 12:28:25.789279  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:25.790127  670144 main.go:141] libmachine: (addons-052630) Ensuring network default is active
	I0923 12:28:25.790514  670144 main.go:141] libmachine: (addons-052630) Ensuring network mk-addons-052630 is active
	I0923 12:28:25.791168  670144 main.go:141] libmachine: (addons-052630) Getting domain xml...
	I0923 12:28:25.792095  670144 main.go:141] libmachine: (addons-052630) Creating domain...
	I0923 12:28:27.038227  670144 main.go:141] libmachine: (addons-052630) Waiting to get IP...
	I0923 12:28:27.038933  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:27.039372  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:27.039471  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:27.039378  670166 retry.go:31] will retry after 209.573222ms: waiting for machine to come up
	I0923 12:28:27.250785  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:27.251320  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:27.251357  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:27.251238  670166 retry.go:31] will retry after 325.370385ms: waiting for machine to come up
	I0923 12:28:27.577921  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:27.578545  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:27.578574  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:27.578492  670166 retry.go:31] will retry after 474.794229ms: waiting for machine to come up
	I0923 12:28:28.055184  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:28.055670  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:28.055696  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:28.055630  670166 retry.go:31] will retry after 474.62618ms: waiting for machine to come up
	I0923 12:28:28.532060  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:28.532544  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:28.532570  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:28.532497  670166 retry.go:31] will retry after 466.59648ms: waiting for machine to come up
	I0923 12:28:29.001527  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:29.002034  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:29.002061  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:29.001954  670166 retry.go:31] will retry after 665.819727ms: waiting for machine to come up
	I0923 12:28:29.670150  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:29.670557  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:29.670586  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:29.670496  670166 retry.go:31] will retry after 826.725256ms: waiting for machine to come up
	I0923 12:28:30.499346  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:30.499773  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:30.499804  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:30.499717  670166 retry.go:31] will retry after 1.111672977s: waiting for machine to come up
	I0923 12:28:31.612864  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:31.613371  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:31.613397  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:31.613333  670166 retry.go:31] will retry after 1.267221609s: waiting for machine to come up
	I0923 12:28:32.882782  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:32.883202  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:32.883225  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:32.883150  670166 retry.go:31] will retry after 2.15228845s: waiting for machine to come up
	I0923 12:28:35.036699  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:35.037202  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:35.037238  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:35.037140  670166 retry.go:31] will retry after 2.618330832s: waiting for machine to come up
	I0923 12:28:37.659044  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:37.659708  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:37.659740  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:37.659658  670166 retry.go:31] will retry after 3.182891363s: waiting for machine to come up
	I0923 12:28:40.843714  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:40.844042  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:40.844066  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:40.843990  670166 retry.go:31] will retry after 4.470723393s: waiting for machine to come up
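
The repeated "will retry after ..." lines above are the libmachine driver polling libvirt for the guest's DHCP lease with an increasing, jittered backoff until an address appears. Below is a minimal Go sketch of that retry pattern; the lookupIP function, field names, and delay growth are illustrative stand-ins, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease query; it is a placeholder.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing, jittered delay, much like the
// "will retry after ..." lines in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay on each attempt
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
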
	I0923 12:28:45.316645  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.317132  670144 main.go:141] libmachine: (addons-052630) Found IP for machine: 192.168.39.225
	I0923 12:28:45.317158  670144 main.go:141] libmachine: (addons-052630) Reserving static IP address...
	I0923 12:28:45.317201  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has current primary IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.317585  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find host DHCP lease matching {name: "addons-052630", mac: "52:54:00:6d:fc:98", ip: "192.168.39.225"} in network mk-addons-052630
	I0923 12:28:45.396974  670144 main.go:141] libmachine: (addons-052630) Reserved static IP address: 192.168.39.225
	I0923 12:28:45.397017  670144 main.go:141] libmachine: (addons-052630) Waiting for SSH to be available...
	I0923 12:28:45.397030  670144 main.go:141] libmachine: (addons-052630) DBG | Getting to WaitForSSH function...
	I0923 12:28:45.399773  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.400242  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.400280  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.400442  670144 main.go:141] libmachine: (addons-052630) DBG | Using SSH client type: external
	I0923 12:28:45.400468  670144 main.go:141] libmachine: (addons-052630) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa (-rw-------)
	I0923 12:28:45.400508  670144 main.go:141] libmachine: (addons-052630) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:28:45.400526  670144 main.go:141] libmachine: (addons-052630) DBG | About to run SSH command:
	I0923 12:28:45.400541  670144 main.go:141] libmachine: (addons-052630) DBG | exit 0
	I0923 12:28:45.526239  670144 main.go:141] libmachine: (addons-052630) DBG | SSH cmd err, output: <nil>: 
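
The WaitForSSH step above shells out to the system ssh client with non-interactive options and runs "exit 0"; a zero exit status means the daemon accepted the key. A rough Go equivalent follows, using the host and user from the log but an illustrative key path; this is a sketch, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady reports whether "ssh user@host exit 0" succeeds, with the same kind
// of non-interactive options shown in the log above.
func sshReady(host, user, key string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", key,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0")
	return cmd.Run() == nil // exit status 0 means the login worked
}

func main() {
	for i := 0; i < 10; i++ {
		// The key path is a placeholder; point it at a real private key to try this.
		if sshReady("192.168.39.225", "docker", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
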
	I0923 12:28:45.526548  670144 main.go:141] libmachine: (addons-052630) KVM machine creation complete!
	I0923 12:28:45.526929  670144 main.go:141] libmachine: (addons-052630) Calling .GetConfigRaw
	I0923 12:28:45.527556  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:45.527717  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:45.527840  670144 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:28:45.527856  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:28:45.529429  670144 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:28:45.529452  670144 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:28:45.529459  670144 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:28:45.529467  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.531511  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.531931  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.531976  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.532096  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.532276  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.532439  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.532595  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.532719  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.532912  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.532928  670144 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:28:45.641401  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:28:45.641429  670144 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:28:45.641436  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.644203  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.644585  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.644605  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.644794  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.645002  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.645132  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.645234  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.645389  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.645579  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.645589  670144 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:28:45.754409  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:28:45.754564  670144 main.go:141] libmachine: found compatible host: buildroot
	I0923 12:28:45.754586  670144 main.go:141] libmachine: Provisioning with buildroot...
	I0923 12:28:45.754597  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:45.754895  670144 buildroot.go:166] provisioning hostname "addons-052630"
	I0923 12:28:45.754923  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:45.755128  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.758313  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.758762  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.758793  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.758946  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.759146  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.759329  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.759482  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.759643  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.759825  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.759836  670144 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-052630 && echo "addons-052630" | sudo tee /etc/hostname
	I0923 12:28:45.884101  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-052630
	
	I0923 12:28:45.884147  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.886809  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.887156  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.887190  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.887396  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.887621  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.887844  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.887995  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.888203  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.888386  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.888401  670144 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-052630' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-052630/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-052630' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:28:46.010925  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:28:46.010962  670144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:28:46.011014  670144 buildroot.go:174] setting up certificates
	I0923 12:28:46.011029  670144 provision.go:84] configureAuth start
	I0923 12:28:46.011047  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:46.011410  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:46.014459  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.014799  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.014825  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.014976  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.017411  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.017737  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.017810  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.017885  670144 provision.go:143] copyHostCerts
	I0923 12:28:46.017961  670144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 12:28:46.018127  670144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:28:46.018208  670144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:28:46.018272  670144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.addons-052630 san=[127.0.0.1 192.168.39.225 addons-052630 localhost minikube]
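
The server certificate generated here carries both IP and DNS SANs (the san=[...] list above). The following is a self-contained Go sketch of producing such a certificate with crypto/x509; it self-signs for brevity, whereas the real flow signs with the minikube CA key.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "addons-052630"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the san=[...] list in the log: IPs plus hostnames.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.225")},
		DNSNames:    []string{"addons-052630", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
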
	I0923 12:28:46.112323  670144 provision.go:177] copyRemoteCerts
	I0923 12:28:46.112412  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:28:46.112450  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.115251  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.115655  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.115682  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.115895  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.116119  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.116317  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.116487  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.199745  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:28:46.222501  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:28:46.245931  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:28:46.268307  670144 provision.go:87] duration metric: took 257.259613ms to configureAuth
	I0923 12:28:46.268338  670144 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:28:46.268561  670144 config.go:182] Loaded profile config "addons-052630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:28:46.268643  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.271831  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.272263  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.272294  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.272469  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.272699  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.272868  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.273026  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.273169  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:46.273365  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:46.273385  670144 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:28:46.493088  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 12:28:46.493128  670144 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:28:46.493136  670144 main.go:141] libmachine: (addons-052630) Calling .GetURL
	I0923 12:28:46.494629  670144 main.go:141] libmachine: (addons-052630) DBG | Using libvirt version 6000000
	I0923 12:28:46.496809  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.497168  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.497204  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.497405  670144 main.go:141] libmachine: Docker is up and running!
	I0923 12:28:46.497422  670144 main.go:141] libmachine: Reticulating splines...
	I0923 12:28:46.497430  670144 client.go:171] duration metric: took 21.593485371s to LocalClient.Create
	I0923 12:28:46.497459  670144 start.go:167] duration metric: took 21.593561276s to libmachine.API.Create "addons-052630"
	I0923 12:28:46.497469  670144 start.go:293] postStartSetup for "addons-052630" (driver="kvm2")
	I0923 12:28:46.497479  670144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:28:46.497499  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.497777  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:28:46.497812  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.501032  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.501490  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.501519  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.501865  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.502081  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.502366  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.502522  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.587938  670144 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:28:46.592031  670144 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:28:46.592074  670144 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:28:46.592166  670144 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:28:46.592204  670144 start.go:296] duration metric: took 94.729785ms for postStartSetup
	I0923 12:28:46.592263  670144 main.go:141] libmachine: (addons-052630) Calling .GetConfigRaw
	I0923 12:28:46.592996  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:46.595992  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.596372  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.596398  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.596737  670144 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/config.json ...
	I0923 12:28:46.596934  670144 start.go:128] duration metric: took 21.712346872s to createHost
	I0923 12:28:46.596958  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.599418  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.599733  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.599767  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.599907  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.600079  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.600203  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.600310  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.600443  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:46.600620  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:46.600630  670144 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:28:46.710677  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727094526.683192770
	
	I0923 12:28:46.710703  670144 fix.go:216] guest clock: 1727094526.683192770
	I0923 12:28:46.710711  670144 fix.go:229] Guest: 2024-09-23 12:28:46.68319277 +0000 UTC Remote: 2024-09-23 12:28:46.596946256 +0000 UTC m=+21.821646719 (delta=86.246514ms)
	I0923 12:28:46.710733  670144 fix.go:200] guest clock delta is within tolerance: 86.246514ms
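
The guest-clock check parses the "date +%s.%N" output, subtracts the host-side timestamp, and accepts the machine if the difference is small. A short Go sketch using the two timestamps from the log above; the tolerance constant is an assumption for illustration, not minikube's configured value.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "date +%s.%N" output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1727094526.683192770")
	if err != nil {
		panic(err)
	}
	// Host-side timestamp taken from the "Remote:" field in the log line above.
	host := time.Date(2024, 9, 23, 12, 28, 46, 596946256, time.UTC)
	delta := guest.Sub(host)
	const tolerance = 1 * time.Second // illustrative tolerance only
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
	// Prints a delta of roughly 86ms, matching the "delta=86.246514ms" line above.
}
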
	I0923 12:28:46.710738  670144 start.go:83] releasing machines lock for "addons-052630", held for 21.826289183s
	I0923 12:28:46.710760  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.711055  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:46.713772  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.714188  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.714222  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.714387  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.714956  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.715183  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.715309  670144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:28:46.715383  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.715446  670144 ssh_runner.go:195] Run: cat /version.json
	I0923 12:28:46.715472  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.718318  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.718628  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.718658  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.718683  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.718845  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.719062  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.719075  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.719096  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.719238  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.719257  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.719450  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.719450  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.719543  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.719701  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.832898  670144 ssh_runner.go:195] Run: systemctl --version
	I0923 12:28:46.838565  670144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:28:46.993556  670144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:28:46.999180  670144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:28:46.999247  670144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:28:47.014650  670144 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:28:47.014678  670144 start.go:495] detecting cgroup driver to use...
	I0923 12:28:47.014749  670144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:28:47.031900  670144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:28:47.045836  670144 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:28:47.045894  670144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:28:47.059242  670144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:28:47.072860  670144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:28:47.194879  670144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:28:47.358066  670144 docker.go:233] disabling docker service ...
	I0923 12:28:47.358133  670144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:28:47.371586  670144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:28:47.384467  670144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:28:47.500779  670144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:28:47.617653  670144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:28:47.631869  670144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:28:47.649294  670144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:28:47.649381  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.659959  670144 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:28:47.660033  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.670550  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.680493  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.691259  670144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:28:47.702167  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.712481  670144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.729016  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
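
The CRI-O tuning above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl). The sketch below replays two of those exact edits through a shell; running it for real requires root and an existing crio.conf drop-in, so treat it as illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	edits := []string{
		// point CRI-O at the pause image kubeadm expects
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		// match kubelet's cgroup driver
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
	}
	for _, e := range edits {
		out, err := exec.Command("sh", "-c", e).CombinedOutput()
		if err != nil {
			fmt.Printf("edit failed: %v\n%s\n", err, out)
			return
		}
	}
	fmt.Println("CRI-O config updated; restart crio to apply")
}
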
	I0923 12:28:47.738741  670144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:28:47.747902  670144 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:28:47.747976  670144 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:28:47.759825  670144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
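
When the bridge-netfilter sysctl is missing, the fallback is to load the br_netfilter module and then enable IPv4 forwarding, exactly the sequence logged above. A sketch of that check-then-load logic follows; it needs root and is illustrative only.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the sysctl entry is absent, the bridge-netfilter module is not loaded yet.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("sysctl missing, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe failed:", err)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}
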
	I0923 12:28:47.770483  670144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:28:47.890638  670144 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 12:28:47.979539  670144 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:28:47.979633  670144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 12:28:47.984471  670144 start.go:563] Will wait 60s for crictl version
	I0923 12:28:47.984558  670144 ssh_runner.go:195] Run: which crictl
	I0923 12:28:47.988396  670144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:28:48.030420  670144 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 12:28:48.030521  670144 ssh_runner.go:195] Run: crio --version
	I0923 12:28:48.056969  670144 ssh_runner.go:195] Run: crio --version
	I0923 12:28:48.087115  670144 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:28:48.088250  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:48.091126  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:48.091525  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:48.091557  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:48.091833  670144 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:28:48.095821  670144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
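
Both host.minikube.internal (here) and control-plane.minikube.internal (later in the log) are pinned by rewriting /etc/hosts: drop any existing line for the name, then append a fresh entry, which is what the grep/echo/cp one-liner above does. Below is a Go sketch of that drop-then-append rewrite, pointed at a scratch file rather than the real /etc/hosts.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry keeps exactly one line for the given name in a hosts file.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line) // keep everything except stale entries for name
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	tmp := "/tmp/hosts.example" // scratch file; do not point this at the real /etc/hosts
	if err := ensureHostsEntry(tmp, "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("updated", tmp)
}
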
	I0923 12:28:48.107261  670144 kubeadm.go:883] updating cluster {Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:28:48.107375  670144 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:28:48.107425  670144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:28:48.137489  670144 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 12:28:48.137564  670144 ssh_runner.go:195] Run: which lz4
	I0923 12:28:48.141366  670144 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 12:28:48.145228  670144 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 12:28:48.145266  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 12:28:49.300797  670144 crio.go:462] duration metric: took 1.159457126s to copy over tarball
	I0923 12:28:49.300880  670144 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 12:28:51.403387  670144 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10247438s)
	I0923 12:28:51.403418  670144 crio.go:469] duration metric: took 2.102584932s to extract the tarball
	I0923 12:28:51.403426  670144 ssh_runner.go:146] rm: /preloaded.tar.lz4
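
Because no preloaded images were found, the ~370 MB preload tarball is copied in, unpacked into /var with tar + lz4, and then removed, as the last few lines show. A condensed Go sketch of that flow using the same path and tar flags as the log; it needs root and the lz4 tool, and the copy step is omitted.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball not present; it would be copied over first:", err)
		return
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	_ = os.Remove(tarball) // free the space once the image layers are unpacked
	fmt.Println("preloaded images extracted into /var")
}
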
	I0923 12:28:51.439644  670144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:28:51.487343  670144 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 12:28:51.487372  670144 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:28:51.487380  670144 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.31.1 crio true true} ...
	I0923 12:28:51.487484  670144 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-052630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:28:51.487549  670144 ssh_runner.go:195] Run: crio config
	I0923 12:28:51.529159  670144 cni.go:84] Creating CNI manager for ""
	I0923 12:28:51.529194  670144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 12:28:51.529211  670144 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:28:51.529243  670144 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-052630 NodeName:addons-052630 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:28:51.529421  670144 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-052630"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 12:28:51.529489  670144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:28:51.538786  670144 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:28:51.538860  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 12:28:51.547357  670144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 12:28:51.563034  670144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:28:51.579309  670144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0923 12:28:51.595202  670144 ssh_runner.go:195] Run: grep 192.168.39.225	control-plane.minikube.internal$ /etc/hosts
	I0923 12:28:51.598885  670144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:28:51.610214  670144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:28:51.733757  670144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:28:51.750735  670144 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630 for IP: 192.168.39.225
	I0923 12:28:51.750770  670144 certs.go:194] generating shared ca certs ...
	I0923 12:28:51.750794  670144 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:51.751013  670144 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:28:51.991610  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt ...
	I0923 12:28:51.991645  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt: {Name:mk278617102c801f9caeeac933d8c272fa433146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:51.991889  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key ...
	I0923 12:28:51.991905  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key: {Name:mk95fd2f326ff7501892adf485a2ad45653eea64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:51.992016  670144 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:28:52.107448  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt ...
	I0923 12:28:52.107483  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt: {Name:mkab8a60190e4e6c41e7af4f15f6ef17b87ed124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.107687  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key ...
	I0923 12:28:52.107702  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key: {Name:mk02e351bcbba1d3a2fba48c9faa8507f1dc7f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.107800  670144 certs.go:256] generating profile certs ...
	I0923 12:28:52.107883  670144 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.key
	I0923 12:28:52.107915  670144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt with IP's: []
	I0923 12:28:52.582241  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt ...
	I0923 12:28:52.582281  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: {Name:mkaf7ea4dbed68876d268afef229ce386755abe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.582498  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.key ...
	I0923 12:28:52.582514  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.key: {Name:mkdce34cb498d97b74470517b32fdf3aa826f879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.582615  670144 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca
	I0923 12:28:52.582638  670144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225]
	I0923 12:28:52.768950  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca ...
	I0923 12:28:52.768994  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca: {Name:mkbaa634fbd0b311944b39e34f00f96971e7ce59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.769251  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca ...
	I0923 12:28:52.769274  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca: {Name:mkf94e3b64c79f3950341d5ac1c59fe9bdbc9286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.769399  670144 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt
	I0923 12:28:52.769586  670144 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key
	I0923 12:28:52.769706  670144 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key
	I0923 12:28:52.769730  670144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt with IP's: []
	I0923 12:28:52.993061  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt ...
	I0923 12:28:52.993100  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt: {Name:mkc6749530eb8ff541e082b9ac5787b31147fda9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.993317  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key ...
	I0923 12:28:52.993335  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key: {Name:mk1f12283a82c9b262b0a92c2d76e010fb6f0100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.993550  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:28:52.993587  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:28:52.993614  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:28:52.993635  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:28:52.994363  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:28:53.025659  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:28:53.052117  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:28:53.077309  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:28:53.103143  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 12:28:53.126620  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 12:28:53.149963  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:28:53.173855  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:28:53.197238  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:28:53.220421  670144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:28:53.236569  670144 ssh_runner.go:195] Run: openssl version
	I0923 12:28:53.242319  670144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:28:53.253251  670144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:28:53.257949  670144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:28:53.258030  670144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:28:53.264286  670144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:28:53.275223  670144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:28:53.279442  670144 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:28:53.279513  670144 kubeadm.go:392] StartCluster: {Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:28:53.279600  670144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 12:28:53.279685  670144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 12:28:53.314839  670144 cri.go:89] found id: ""
	I0923 12:28:53.314909  670144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:28:53.327186  670144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 12:28:53.336989  670144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 12:28:53.361585  670144 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:28:53.361612  670144 kubeadm.go:157] found existing configuration files:
	
	I0923 12:28:53.361662  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 12:28:53.381977  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:28:53.382054  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 12:28:53.392118  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 12:28:53.401098  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:28:53.401165  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 12:28:53.410993  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 12:28:53.420212  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:28:53.420273  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 12:28:53.429796  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 12:28:53.439423  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:28:53.439499  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 12:28:53.449163  670144 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 12:28:53.502584  670144 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 12:28:53.502741  670144 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 12:28:53.605559  670144 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 12:28:53.605689  670144 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 12:28:53.605816  670144 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 12:28:53.618515  670144 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 12:28:53.836787  670144 out.go:235]   - Generating certificates and keys ...
	I0923 12:28:53.836912  670144 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 12:28:53.836995  670144 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 12:28:53.873040  670144 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 12:28:54.032114  670144 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 12:28:54.141767  670144 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 12:28:54.255622  670144 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 12:28:54.855891  670144 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 12:28:54.856105  670144 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-052630 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0923 12:28:55.008507  670144 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 12:28:55.008690  670144 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-052630 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0923 12:28:55.205727  670144 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 12:28:55.375985  670144 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 12:28:55.604036  670144 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 12:28:55.604271  670144 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 12:28:55.664982  670144 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 12:28:55.716232  670144 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 12:28:55.974342  670144 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 12:28:56.056044  670144 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 12:28:56.242837  670144 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 12:28:56.243301  670144 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 12:28:56.245752  670144 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 12:28:56.248113  670144 out.go:235]   - Booting up control plane ...
	I0923 12:28:56.248255  670144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 12:28:56.248368  670144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 12:28:56.248457  670144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 12:28:56.267013  670144 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 12:28:56.273131  670144 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 12:28:56.273201  670144 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 12:28:56.405616  670144 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 12:28:56.405814  670144 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 12:28:57.405800  670144 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001202262s
	I0923 12:28:57.405948  670144 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 12:29:02.406200  670144 kubeadm.go:310] [api-check] The API server is healthy after 5.001766702s
	I0923 12:29:02.416901  670144 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 12:29:02.435826  670144 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 12:29:02.465176  670144 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 12:29:02.465450  670144 kubeadm.go:310] [mark-control-plane] Marking the node addons-052630 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 12:29:02.478428  670144 kubeadm.go:310] [bootstrap-token] Using token: 6nlf9d.x8d4dbn01qyxu2me
	I0923 12:29:02.480122  670144 out.go:235]   - Configuring RBAC rules ...
	I0923 12:29:02.480273  670144 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 12:29:02.484831  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 12:29:02.498051  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 12:29:02.506535  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 12:29:02.510753  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 12:29:02.514110  670144 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 12:29:02.816841  670144 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 12:29:03.265469  670144 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 12:29:03.814814  670144 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 12:29:03.815665  670144 kubeadm.go:310] 
	I0923 12:29:03.815740  670144 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 12:29:03.815754  670144 kubeadm.go:310] 
	I0923 12:29:03.815856  670144 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 12:29:03.815884  670144 kubeadm.go:310] 
	I0923 12:29:03.815943  670144 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 12:29:03.816033  670144 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 12:29:03.816112  670144 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 12:29:03.816122  670144 kubeadm.go:310] 
	I0923 12:29:03.816205  670144 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 12:29:03.816220  670144 kubeadm.go:310] 
	I0923 12:29:03.816283  670144 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 12:29:03.816292  670144 kubeadm.go:310] 
	I0923 12:29:03.816361  670144 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 12:29:03.816459  670144 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 12:29:03.816557  670144 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 12:29:03.816565  670144 kubeadm.go:310] 
	I0923 12:29:03.816662  670144 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 12:29:03.816807  670144 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 12:29:03.816828  670144 kubeadm.go:310] 
	I0923 12:29:03.816928  670144 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6nlf9d.x8d4dbn01qyxu2me \
	I0923 12:29:03.817053  670144 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff \
	I0923 12:29:03.817087  670144 kubeadm.go:310] 	--control-plane 
	I0923 12:29:03.817098  670144 kubeadm.go:310] 
	I0923 12:29:03.817208  670144 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 12:29:03.817218  670144 kubeadm.go:310] 
	I0923 12:29:03.817336  670144 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6nlf9d.x8d4dbn01qyxu2me \
	I0923 12:29:03.817491  670144 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff 
	I0923 12:29:03.818641  670144 kubeadm.go:310] W0923 12:28:53.480461     822 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:29:03.818988  670144 kubeadm.go:310] W0923 12:28:53.482044     822 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:29:03.819085  670144 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 12:29:03.819100  670144 cni.go:84] Creating CNI manager for ""
	I0923 12:29:03.819107  670144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 12:29:03.821098  670144 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 12:29:03.822568  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 12:29:03.832801  670144 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 12:29:03.849124  670144 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 12:29:03.849234  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:03.849289  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-052630 minikube.k8s.io/updated_at=2024_09_23T12_29_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-052630 minikube.k8s.io/primary=true
	I0923 12:29:03.869073  670144 ops.go:34] apiserver oom_adj: -16
	I0923 12:29:03.987718  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:04.487902  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:04.988414  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:05.488480  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:05.988814  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:06.488344  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:06.987998  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:07.487981  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:07.987977  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:08.098139  670144 kubeadm.go:1113] duration metric: took 4.248990269s to wait for elevateKubeSystemPrivileges
	I0923 12:29:08.098178  670144 kubeadm.go:394] duration metric: took 14.818670797s to StartCluster
	I0923 12:29:08.098199  670144 settings.go:142] acquiring lock: {Name:mk3da09e51125fc906a9e1276ab490fc7b26b03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:29:08.098319  670144 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:29:08.098684  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/kubeconfig: {Name:mk213d38080414fbe499f6509d2653fd99103348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:29:08.098883  670144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 12:29:08.098897  670144 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:29:08.098959  670144 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 12:29:08.099099  670144 addons.go:69] Setting yakd=true in profile "addons-052630"
	I0923 12:29:08.099104  670144 config.go:182] Loaded profile config "addons-052630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:29:08.099133  670144 addons.go:234] Setting addon yakd=true in "addons-052630"
	I0923 12:29:08.099140  670144 addons.go:69] Setting inspektor-gadget=true in profile "addons-052630"
	I0923 12:29:08.099148  670144 addons.go:69] Setting default-storageclass=true in profile "addons-052630"
	I0923 12:29:08.099155  670144 addons.go:69] Setting ingress=true in profile "addons-052630"
	I0923 12:29:08.099164  670144 addons.go:69] Setting metrics-server=true in profile "addons-052630"
	I0923 12:29:08.099174  670144 addons.go:69] Setting cloud-spanner=true in profile "addons-052630"
	I0923 12:29:08.099179  670144 addons.go:234] Setting addon ingress=true in "addons-052630"
	I0923 12:29:08.099186  670144 addons.go:234] Setting addon metrics-server=true in "addons-052630"
	I0923 12:29:08.099174  670144 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-052630"
	I0923 12:29:08.099213  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099168  670144 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-052630"
	I0923 12:29:08.099224  670144 addons.go:69] Setting storage-provisioner=true in profile "addons-052630"
	I0923 12:29:08.099247  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099248  670144 addons.go:234] Setting addon storage-provisioner=true in "addons-052630"
	I0923 12:29:08.099178  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099297  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099185  670144 addons.go:69] Setting volcano=true in profile "addons-052630"
	I0923 12:29:08.099407  670144 addons.go:234] Setting addon volcano=true in "addons-052630"
	I0923 12:29:08.099456  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099684  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099696  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099705  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099709  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099726  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099123  670144 addons.go:69] Setting ingress-dns=true in profile "addons-052630"
	I0923 12:29:08.099728  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099737  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099739  670144 addons.go:234] Setting addon ingress-dns=true in "addons-052630"
	I0923 12:29:08.099769  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099797  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099133  670144 addons.go:69] Setting registry=true in profile "addons-052630"
	I0923 12:29:08.099726  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099823  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099158  670144 addons.go:234] Setting addon inspektor-gadget=true in "addons-052630"
	I0923 12:29:08.099199  670144 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-052630"
	I0923 12:29:08.099850  670144 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-052630"
	I0923 12:29:08.099824  670144 addons.go:234] Setting addon registry=true in "addons-052630"
	I0923 12:29:08.099189  670144 addons.go:234] Setting addon cloud-spanner=true in "addons-052630"
	I0923 12:29:08.099150  670144 addons.go:69] Setting gcp-auth=true in profile "addons-052630"
	I0923 12:29:08.099904  670144 mustload.go:65] Loading cluster: addons-052630
	I0923 12:29:08.099944  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099191  670144 addons.go:69] Setting volumesnapshots=true in profile "addons-052630"
	I0923 12:29:08.099995  670144 addons.go:234] Setting addon volumesnapshots=true in "addons-052630"
	I0923 12:29:08.100023  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100047  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100072  670144 config.go:182] Loaded profile config "addons-052630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:29:08.100106  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100108  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100138  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100335  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100357  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100427  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100433  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100447  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100452  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100507  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100524  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100027  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100940  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100978  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099218  670144 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-052630"
	I0923 12:29:08.101095  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.101121  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099193  670144 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-052630"
	I0923 12:29:08.101287  670144 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-052630"
	I0923 12:29:08.101320  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.101767  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.101789  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099835  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.103920  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.110406  670144 out.go:177] * Verifying Kubernetes components...
	I0923 12:29:08.119535  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.119599  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.120427  670144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:29:08.121315  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0923 12:29:08.131609  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0923 12:29:08.131626  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0923 12:29:08.131667  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.131728  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I0923 12:29:08.131769  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34841
	I0923 12:29:08.132495  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.132503  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.132728  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40563
	I0923 12:29:08.132745  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.132750  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.132759  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133032  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.133052  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133306  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.133386  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.133413  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.133429  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133440  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.133482  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.133740  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.133761  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133851  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.134081  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.134103  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.134261  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.134297  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.134429  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.134444  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.134456  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.134491  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.134545  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.134840  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.135147  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.135183  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.135520  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.135605  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.136217  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.136235  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.136747  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.137331  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.137369  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.164109  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0923 12:29:08.164380  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0923 12:29:08.164631  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.164825  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.165148  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.165170  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.165570  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.165782  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.165803  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.165872  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.166203  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.166826  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.166869  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.167521  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46661
	I0923 12:29:08.169501  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0923 12:29:08.174598  670144 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-052630"
	I0923 12:29:08.178846  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.179076  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.178895  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0923 12:29:08.178930  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I0923 12:29:08.178972  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0923 12:29:08.178981  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46573
	I0923 12:29:08.178989  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I0923 12:29:08.179006  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I0923 12:29:08.179011  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37487
	I0923 12:29:08.180724  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.181079  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.181494  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.181522  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.181629  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.182366  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.182449  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.182465  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.182959  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.183025  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.183079  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.183168  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.183230  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184031  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184134  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.184154  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184166  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.184243  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184292  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.184307  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.184322  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184439  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.184449  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.184993  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.185059  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.185103  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.185104  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.185125  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.185195  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.185234  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.185246  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.185293  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.185354  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I0923 12:29:08.185636  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.185676  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.186611  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.186677  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.186857  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.187550  670144 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 12:29:08.187925  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.187956  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.188199  670144 addons.go:234] Setting addon default-storageclass=true in "addons-052630"
	I0923 12:29:08.188242  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.188598  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.188651  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.188880  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 12:29:08.188903  670144 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 12:29:08.188923  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.189126  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.189189  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.189258  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.189738  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.191347  670144 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:29:08.191425  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.193271  670144 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 12:29:08.193533  670144 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:29:08.193553  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:29:08.193574  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.193841  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.193953  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.194007  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.194283  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.194821  670144 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 12:29:08.194839  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 12:29:08.194858  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.195552  670144 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 12:29:08.195768  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.195845  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37407
	I0923 12:29:08.196376  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0923 12:29:08.196521  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.196672  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 12:29:08.196691  670144 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 12:29:08.196719  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.197056  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.197598  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.197684  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.197702  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.198047  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.198072  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.198113  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.198266  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.198283  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.198479  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.198489  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.198547  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.198664  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.198771  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.198953  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.198987  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.199210  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.199249  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.199775  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.199959  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.202164  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.202238  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.202474  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.202495  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.202578  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.202596  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.203141  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.203337  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.203517  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.203558  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.203645  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.203720  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.203863  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.203890  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.204069  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.204122  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.204301  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.204456  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.204512  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.204526  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.204686  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.204802  670144 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 12:29:08.204956  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.205170  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.205332  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.205461  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.206267  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.206285  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.206516  670144 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 12:29:08.206532  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 12:29:08.206551  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.206706  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.207377  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.207419  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.208406  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46335
	I0923 12:29:08.209619  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.210047  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.210073  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.210236  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.210426  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.210566  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.210684  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.219445  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I0923 12:29:08.219533  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I0923 12:29:08.219589  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0923 12:29:08.220785  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0923 12:29:08.222697  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45277
	I0923 12:29:08.225038  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39561
	I0923 12:29:08.230680  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.230751  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.231036  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231200  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231237  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231376  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231767  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231972  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.231987  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233085  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.233089  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.233147  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.233211  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233227  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233345  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233361  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233363  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233373  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233375  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233386  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233880  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.233899  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233917  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233942  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.233992  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.234058  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.234091  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.234676  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.234695  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.234731  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.234771  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.234892  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.235047  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.235091  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.235382  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.235459  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.236193  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.236849  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.236900  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.238129  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.238450  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.238525  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.238905  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:08.238923  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:08.239076  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:08.239089  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:08.239099  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:08.239108  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:08.239201  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.240929  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 12:29:08.240995  670144 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 12:29:08.241278  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:08.242787  670144 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0923 12:29:08.242806  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 12:29:08.242897  670144 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 12:29:08.242950  670144 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 12:29:08.243197  670144 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 12:29:08.243226  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 12:29:08.243249  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.244528  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 12:29:08.246261  670144 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 12:29:08.246338  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 12:29:08.248195  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.248288  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.248307  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.248324  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.248538  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.248670  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.248779  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.250051  670144 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 12:29:08.250094  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 12:29:08.250119  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.251740  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 12:29:08.253185  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 12:29:08.253489  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.254182  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.254209  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.254598  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.254820  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.255024  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.255199  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.255972  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 12:29:08.256311  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0923 12:29:08.256884  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.256951  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0923 12:29:08.257532  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.257556  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.257657  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.258214  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.258239  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.258317  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.258515  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.258635  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 12:29:08.259348  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.259794  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.260013  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44129
	I0923 12:29:08.260784  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.260900  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 12:29:08.261518  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.262280  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 12:29:08.262305  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 12:29:08.262329  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.263111  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0923 12:29:08.263125  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.263182  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.263211  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.263259  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:29:08.263553  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.263921  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.264090  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.264286  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.264224  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.264779  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.264968  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.266052  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 12:29:08.266086  670144 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 12:29:08.266718  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.266760  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.267350  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.267376  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.267443  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.267645  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.267821  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.268028  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.268401  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.268717  670144 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:29:08.268738  670144 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:29:08.268757  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.269685  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 12:29:08.269698  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:29:08.270652  670144 out.go:177]   - Using image docker.io/busybox:stable
	I0923 12:29:08.271437  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 12:29:08.271460  670144 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 12:29:08.271489  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.271705  670144 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 12:29:08.271764  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 12:29:08.271806  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.271995  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.272341  670144 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 12:29:08.272361  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 12:29:08.272378  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.274161  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.274186  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.274494  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.274772  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.274952  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.275114  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.275804  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.275823  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.276398  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.276424  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.276437  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.276506  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.276618  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.276764  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.276814  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.276970  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.276988  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.277148  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.277311  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.277371  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37819
	I0923 12:29:08.277484  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.277856  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.277961  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.278476  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.278486  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.278532  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.278534  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.278618  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.278754  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.278860  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.278893  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.278987  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.279199  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.280614  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	W0923 12:29:08.281601  670144 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40984->192.168.39.225:22: read: connection reset by peer
	I0923 12:29:08.281629  670144 retry.go:31] will retry after 168.892195ms: ssh: handshake failed: read tcp 192.168.39.1:40984->192.168.39.225:22: read: connection reset by peer
	I0923 12:29:08.282699  670144 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 12:29:08.283895  670144 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 12:29:08.283910  670144 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 12:29:08.283931  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.286545  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.286945  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.286960  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.287159  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.287298  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.287395  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.287501  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	W0923 12:29:08.451555  670144 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41002->192.168.39.225:22: read: connection reset by peer
	I0923 12:29:08.451611  670144 retry.go:31] will retry after 370.404405ms: ssh: handshake failed: read tcp 192.168.39.1:41002->192.168.39.225:22: read: connection reset by peer
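The two "dial failure (will retry)" entries above are transient: the first SSH dials race the guest's sshd coming up, and retry.go simply backs off for a fraction of a second before redialing. A minimal back-off-and-retry loop in the same spirit (an illustrative shell sketch, not minikube's actual implementation; it reuses the key path, user, and IP shown in the log) could look like:

	# probe sshd with a growing delay between attempts (sketch only)
	delay=0.2
	for attempt in 1 2 3 4 5; do
	  ssh -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa \
	      -o ConnectTimeout=5 docker@192.168.39.225 true && break
	  echo "ssh not ready (attempt $attempt); retrying in ${delay}s"
	  sleep "$delay"
	  delay=$(echo "$delay * 2" | bc)
	done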
	I0923 12:29:08.501288  670144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:29:08.501333  670144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 12:29:08.574946  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:29:08.650848  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 12:29:08.710883  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 12:29:08.718226  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 12:29:08.718254  670144 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 12:29:08.724979  670144 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 12:29:08.725012  670144 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 12:29:08.729985  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 12:29:08.730007  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 12:29:08.749343  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:29:08.759919  670144 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 12:29:08.759951  670144 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 12:29:08.762704  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 12:29:08.762725  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 12:29:08.780285  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 12:29:08.797085  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 12:29:08.819576  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 12:29:08.871295  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 12:29:08.871331  670144 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 12:29:08.873395  670144 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 12:29:08.873415  670144 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 12:29:08.913764  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 12:29:08.913797  670144 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 12:29:08.953695  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 12:29:08.953730  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 12:29:08.989719  670144 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 12:29:08.989745  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 12:29:09.174275  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 12:29:09.174311  670144 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 12:29:09.209701  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 12:29:09.213032  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:29:09.213062  670144 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 12:29:09.235662  670144 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 12:29:09.235711  670144 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 12:29:09.249524  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 12:29:09.249560  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 12:29:09.318365  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:29:09.380514  670144 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 12:29:09.380546  670144 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 12:29:09.396450  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 12:29:09.396479  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 12:29:09.491655  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 12:29:09.491699  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 12:29:09.507296  670144 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 12:29:09.507325  670144 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 12:29:09.619384  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 12:29:09.674496  670144 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 12:29:09.674532  670144 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 12:29:09.791378  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 12:29:09.791409  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 12:29:09.916463  670144 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 12:29:09.916518  670144 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 12:29:10.095369  670144 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 12:29:10.095403  670144 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 12:29:10.151495  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 12:29:10.151529  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 12:29:10.341472  670144 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 12:29:10.341505  670144 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 12:29:10.355580  670144 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 12:29:10.355613  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 12:29:10.419301  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 12:29:10.419334  670144 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 12:29:10.525480  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 12:29:10.525516  670144 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 12:29:10.591491  670144 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 12:29:10.591518  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 12:29:10.598636  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 12:29:10.676043  670144 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.174707084s)
	I0923 12:29:10.676099  670144 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.174727254s)
	I0923 12:29:10.676164  670144 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
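The sed pipeline that just completed splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side address 192.168.39.1. Assuming the stock kubeadm CoreDNS ConfigMap (Corefile stored under the "Corefile" data key), the injected stanza can be checked afterwards with something like:

	# inspect the patched Corefile (sketch; assumes the default coredns ConfigMap layout)
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# the output should now contain, ahead of the forward plugin:
	#     hosts {
	#        192.168.39.1 host.minikube.internal
	#        fallthrough
	#     }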
	I0923 12:29:10.677107  670144 node_ready.go:35] waiting up to 6m0s for node "addons-052630" to be "Ready" ...
	I0923 12:29:10.681243  670144 node_ready.go:49] node "addons-052630" has status "Ready":"True"
	I0923 12:29:10.681278  670144 node_ready.go:38] duration metric: took 4.144676ms for node "addons-052630" to be "Ready" ...
	I0923 12:29:10.681290  670144 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:29:10.697913  670144 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace to be "Ready" ...
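node_ready.go and pod_ready.go poll the API server until the node and each system-critical pod report the Ready condition, capped at 6m0s. Roughly the same checks can be expressed with kubectl (an illustrative equivalent, not what the harness actually runs):

	# wait for the node and the CoreDNS pods to become Ready (sketch)
	kubectl wait --for=condition=Ready node/addons-052630 --timeout=360s
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s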
	I0923 12:29:10.820653  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 12:29:10.825588  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 12:29:10.825612  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 12:29:11.166886  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 12:29:11.166909  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 12:29:11.180409  670144 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-052630" context rescaled to 1 replicas
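kapi.go scales the coredns deployment from the two replicas seen in the readiness checks above down to one, since a single-node cluster does not need both. The manual equivalent would be roughly:

	# sketch of the rescale step
	kubectl -n kube-system scale deployment coredns --replicas=1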
	I0923 12:29:11.447351  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 12:29:11.447384  670144 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 12:29:11.721490  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 12:29:12.078341  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.427447212s)
	I0923 12:29:12.078414  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078429  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.078443  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.503450542s)
	I0923 12:29:12.078485  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078498  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.078823  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:12.078831  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.078854  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.078856  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.078863  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078868  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.078871  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.078878  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078891  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.079227  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:12.079263  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.079271  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.079315  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:12.079335  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.079341  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.803456  670144 pod_ready.go:103] pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:13.600807  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.889878058s)
	I0923 12:29:13.600875  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.600825  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.851443065s)
	I0923 12:29:13.600943  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.600962  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.600888  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.600895  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.820571857s)
	I0923 12:29:13.601061  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601070  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601238  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.601278  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.601285  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.601270  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.601304  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.601315  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601328  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601293  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601389  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601391  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.601429  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.601437  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.601449  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601455  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601954  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.602020  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.602042  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.602063  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.602072  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.602294  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.602306  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.603331  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.603349  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.801670  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.801695  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.802002  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.802041  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 12:29:13.802159  670144 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
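The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: the StorageClass object changed between read and update, so the attempt to mark local-path as the default lost the race ("the object has been modified"). Re-issuing the change against the latest object version succeeds; a manual sketch of that step:

	# mark local-path as the default StorageClass (safe to re-run after a conflict)
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'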
	I0923 12:29:13.880403  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.880433  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.880754  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.880776  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.880836  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:14.235264  670144 pod_ready.go:93] pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:14.235297  670144 pod_ready.go:82] duration metric: took 3.537339059s for pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:14.235308  670144 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v7dmc" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:14.291401  670144 pod_ready.go:93] pod "coredns-7c65d6cfc9-v7dmc" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:14.291428  670144 pod_ready.go:82] duration metric: took 56.113983ms for pod "coredns-7c65d6cfc9-v7dmc" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:14.291438  670144 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.285912  670144 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 12:29:15.285962  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:15.289442  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.289901  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:15.289933  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.290206  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:15.290456  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:15.290643  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:15.290816  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:15.584286  670144 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 12:29:15.772056  670144 addons.go:234] Setting addon gcp-auth=true in "addons-052630"
	I0923 12:29:15.772177  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:15.772565  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:15.772604  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:15.789694  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0923 12:29:15.790390  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:15.790928  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:15.790953  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:15.791398  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:15.791922  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:15.791974  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:15.808522  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43819
	I0923 12:29:15.809129  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:15.809845  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:15.809875  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:15.810306  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:15.810586  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:15.812642  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:15.812962  670144 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 12:29:15.812999  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:15.816164  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.816654  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:15.816681  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.816904  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:15.817091  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:15.817236  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:15.817376  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:15.891555  670144 pod_ready.go:93] pod "etcd-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:15.891581  670144 pod_ready.go:82] duration metric: took 1.60013549s for pod "etcd-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.891591  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.987597  670144 pod_ready.go:93] pod "kube-apiserver-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:15.987625  670144 pod_ready.go:82] duration metric: took 96.027461ms for pod "kube-apiserver-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.987635  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.145156  670144 pod_ready.go:93] pod "kube-controller-manager-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:16.145181  670144 pod_ready.go:82] duration metric: took 157.538978ms for pod "kube-controller-manager-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.145191  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn9km" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.318509  670144 pod_ready.go:93] pod "kube-proxy-vn9km" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:16.318542  670144 pod_ready.go:82] duration metric: took 173.342123ms for pod "kube-proxy-vn9km" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.318556  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.367647  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.570518238s)
	I0923 12:29:16.367707  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.548102227s)
	I0923 12:29:16.367717  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.367731  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.367736  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.367751  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.367955  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.158101812s)
	I0923 12:29:16.368015  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368031  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368190  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368220  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368221  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368223  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368320  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368344  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368231  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368372  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368380  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368401  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.74898188s)
	I0923 12:29:16.368253  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368427  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368432  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368436  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368440  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368446  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368565  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.769896333s)
	I0923 12:29:16.368589  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368597  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368664  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368679  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368279  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368699  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368353  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.369082  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369131  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369155  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.369160  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.369167  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.369173  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.369248  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369265  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369295  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.369301  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.369309  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.369315  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.370458  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.370480  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.370493  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.370494  670144 addons.go:475] Verifying addon registry=true in "addons-052630"
	I0923 12:29:16.370783  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.370808  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.370815  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.371296  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.371308  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.371446  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.371466  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.371473  670144 addons.go:475] Verifying addon ingress=true in "addons-052630"
	I0923 12:29:16.372129  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.053719131s)
	I0923 12:29:16.372181  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.372203  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.372468  670144 out.go:177] * Verifying registry addon...
	I0923 12:29:16.372506  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.372533  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.373064  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.373074  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.373084  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.372536  670144 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-052630 service yakd-dashboard -n yakd-dashboard
	
	I0923 12:29:16.373416  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.373455  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.373463  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.373482  670144 addons.go:475] Verifying addon metrics-server=true in "addons-052630"
	I0923 12:29:16.373548  670144 out.go:177] * Verifying ingress addon...
	I0923 12:29:16.376859  670144 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 12:29:16.377235  670144 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 12:29:16.403137  670144 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 12:29:16.403166  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:16.404545  670144 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 12:29:16.404577  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:16.413711  670144 pod_ready.go:93] pod "kube-scheduler-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:16.413735  670144 pod_ready.go:82] duration metric: took 95.170893ms for pod "kube-scheduler-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.413745  670144 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace to be "Ready" ...
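	The kapi and pod_ready waits above simply poll the API server until pods matching the given label selectors report Ready. A minimal sketch of the same check done by hand against this cluster, using the label selectors and context name taken from the log (the --timeout values are illustrative assumptions, not what the harness uses):
	
		kubectl --context addons-052630 -n kube-system wait pod \
		  -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=10m
		kubectl --context addons-052630 -n ingress-nginx wait pod \
		  -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
	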
	I0923 12:29:16.687574  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.866859653s)
	W0923 12:29:16.687654  670144 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 12:29:16.687692  670144 retry.go:31] will retry after 205.184874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
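	The failure above is the usual CRD ordering race: the VolumeSnapshotClass object is applied in the same batch as the snapshot.storage.k8s.io CRDs, before the API server has registered the new kind, hence "ensure CRDs are installed first" and the retry with --force that follows. A minimal manual sketch that avoids the race (manifest paths copied from the log; the explicit wait step is the added assumption):
	
		# install the CRDs first and wait for them to be established
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
		  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
		  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
		kubectl wait --for=condition=Established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		# only then apply the objects that depend on those CRDs
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
		  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
		  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	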
	I0923 12:29:16.893570  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 12:29:17.115140  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:17.115729  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:17.396617  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:17.396842  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:17.889967  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:17.890486  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:17.896395  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.174848485s)
	I0923 12:29:17.896449  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:17.896460  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:17.896462  670144 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.083466495s)
	I0923 12:29:17.896747  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:17.896804  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:17.896821  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:17.896830  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:17.897120  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:17.897136  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:17.897147  670144 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-052630"
	I0923 12:29:17.898347  670144 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 12:29:17.898446  670144 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 12:29:17.899858  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:29:17.900628  670144 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 12:29:17.901271  670144 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 12:29:17.901295  670144 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 12:29:17.940858  670144 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 12:29:17.940896  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:17.996704  670144 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 12:29:17.996735  670144 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 12:29:18.047586  670144 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 12:29:18.047614  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 12:29:18.096484  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 12:29:18.185732  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.292020776s)
	I0923 12:29:18.185806  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:18.185838  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:18.186138  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:18.186158  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:18.186169  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:18.186177  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:18.186426  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:18.186447  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:18.387863  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:18.388256  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:18.406385  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:18.421720  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:18.882500  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:18.882785  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:18.905191  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:19.387726  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:19.388481  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:19.411200  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:19.581790  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.485262596s)
	I0923 12:29:19.581873  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:19.581891  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:19.582219  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:19.582276  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:19.582301  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:19.582317  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:19.582328  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:19.582590  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:19.582647  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:19.582672  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:19.584672  670144 addons.go:475] Verifying addon gcp-auth=true in "addons-052630"
	I0923 12:29:19.586440  670144 out.go:177] * Verifying gcp-auth addon...
	I0923 12:29:19.589206  670144 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 12:29:19.620640  670144 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 12:29:19.620668  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:19.886738  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:19.890925  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:19.912686  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:20.096746  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:20.392258  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:20.393710  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:20.407449  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:20.593567  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:20.881568  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:20.881815  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:20.905516  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:20.920340  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:21.093740  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:21.384843  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:21.384987  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:21.405282  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:21.592541  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:21.884592  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:21.885028  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:21.908345  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:22.093490  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:22.386941  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:22.387161  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:22.404796  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:22.592403  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:22.881616  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:22.881661  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:22.905343  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:23.093177  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:23.384666  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:23.386163  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:23.426576  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:23.487848  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:23.592494  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:23.882714  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:23.883358  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:23.906870  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:24.092492  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:24.382319  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:24.382983  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:24.407140  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:24.593539  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:24.882594  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:24.883125  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:24.905274  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:25.092842  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:25.382809  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:25.382812  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:25.406742  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:25.593227  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:25.884510  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:25.888982  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:25.905898  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:25.927041  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:26.093083  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:26.381626  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:26.382291  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:26.405944  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:26.592774  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:26.882136  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:26.882387  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:26.904852  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:27.093581  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:27.382186  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:27.382448  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:27.405778  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:27.593357  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:27.884042  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:27.884439  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:27.985517  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:28.092766  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:28.381805  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:28.381982  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:28.405524  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:28.424581  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:28.592693  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:28.882335  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:28.882461  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:28.905150  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:29.093790  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:29.381852  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:29.381930  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:29.406197  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:29.593870  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:29.882541  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:29.882798  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:29.905474  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:30.093606  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:30.382135  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:30.382392  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:30.404887  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:30.592667  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:30.881745  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:30.881985  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:30.907119  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:30.923733  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:31.093218  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:31.381583  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:31.381644  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:31.405219  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:31.593141  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:31.881719  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:31.882449  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:31.905985  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:32.093520  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:32.381819  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:32.382499  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:32.406447  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:32.592822  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:32.883086  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:32.883410  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:32.904975  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:33.093110  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:33.381891  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:33.383762  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:33.407942  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:33.422107  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:33.593115  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:33.881264  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:33.881728  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:33.906608  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:34.093572  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:34.381552  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:34.382128  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:34.405613  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:34.592996  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:34.882206  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:34.882652  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:34.907227  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:35.092746  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:35.381896  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:35.382256  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:35.405744  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:35.593906  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:35.882021  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:35.882250  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:35.905757  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:35.919545  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:36.093133  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:36.381087  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:36.381911  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:36.405918  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:36.593023  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:36.880871  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:36.881484  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:36.905513  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:37.093228  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:37.381359  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:37.382168  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:37.404758  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:37.592991  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:37.883706  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:37.884057  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:37.905951  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:37.921061  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:38.095579  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:38.381352  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:38.382050  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:38.406732  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:38.592418  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:38.882769  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:38.884781  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:38.909673  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:39.092517  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:39.384210  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:39.385066  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:39.405577  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:39.592411  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:39.882233  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:39.882964  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:39.905696  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:39.921969  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:40.092984  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:40.382732  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:40.383202  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:40.405785  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:40.593074  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:40.882030  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:40.882422  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:40.904994  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:41.093877  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:41.383225  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:41.383328  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:41.405996  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:41.593221  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:41.881622  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:41.881736  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:41.905316  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:42.093230  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:42.382510  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:42.382663  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:42.405377  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:42.419518  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:42.592420  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:42.880988  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:42.881203  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:42.906415  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:43.092742  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:43.382514  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:43.383733  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:43.719884  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:43.720755  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:43.888232  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:43.889178  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:43.904914  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:44.094101  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:44.383060  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:44.383829  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:44.405971  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:44.592595  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:44.887366  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:44.887955  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:44.906306  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:44.922735  670144 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:44.922765  670144 pod_ready.go:82] duration metric: took 28.50901084s for pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:44.922773  670144 pod_ready.go:39] duration metric: took 34.241469342s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:29:44.922792  670144 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:29:44.922851  670144 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:29:44.942826  670144 api_server.go:72] duration metric: took 36.843890873s to wait for apiserver process to appear ...
	I0923 12:29:44.942854  670144 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:29:44.942876  670144 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0923 12:29:44.947699  670144 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0923 12:29:44.948883  670144 api_server.go:141] control plane version: v1.31.1
	I0923 12:29:44.948908  670144 api_server.go:131] duration metric: took 6.047956ms to wait for apiserver health ...
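	The healthz probe above is a plain authenticated GET against the API server's health endpoint. A roughly equivalent check from the command line, reusing the kubeconfig credentials (the verbose readyz variant is an optional extra, not something the test performs):
	
		kubectl --context addons-052630 get --raw=/healthz
		kubectl --context addons-052630 get --raw='/readyz?verbose'
	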
	I0923 12:29:44.948917  670144 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:29:44.958208  670144 system_pods.go:59] 17 kube-system pods found
	I0923 12:29:44.958245  670144 system_pods.go:61] "coredns-7c65d6cfc9-cvw7x" [3de8bd3c-0baf-459b-94f8-f5d52ef1286d] Running
	I0923 12:29:44.958253  670144 system_pods.go:61] "csi-hostpath-attacher-0" [4c3e1f51-c4eb-4fa0-ab09-335efd2aa843] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 12:29:44.958259  670144 system_pods.go:61] "csi-hostpath-resizer-0" [e4676deb-26a8-4a3c-87ac-a226db6563ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 12:29:44.958271  670144 system_pods.go:61] "csi-hostpathplugin-jd2lw" [feb3c94a-858a-4f61-a148-8b64dcfd0934] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 12:29:44.958276  670144 system_pods.go:61] "etcd-addons-052630" [ecb6248b-7e04-4747-946a-eb8fc976147e] Running
	I0923 12:29:44.958280  670144 system_pods.go:61] "kube-apiserver-addons-052630" [578f26c5-733e-4d3b-85da-ecade8aa52dd] Running
	I0923 12:29:44.958284  670144 system_pods.go:61] "kube-controller-manager-addons-052630" [55212af5-b2df-4621-a846-c8912549238d] Running
	I0923 12:29:44.958288  670144 system_pods.go:61] "kube-ingress-dns-minikube" [2187b5c3-511a-4aab-a372-f66d680bbf18] Running
	I0923 12:29:44.958291  670144 system_pods.go:61] "kube-proxy-vn9km" [0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00] Running
	I0923 12:29:44.958295  670144 system_pods.go:61] "kube-scheduler-addons-052630" [a180218d-c5e9-4947-b527-7f9570b9c578] Running
	I0923 12:29:44.958300  670144 system_pods.go:61] "metrics-server-84c5f94fbc-2rhln" [e7c5ceb3-389e-43ff-b807-718f23f12b0f] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 12:29:44.958304  670144 system_pods.go:61] "nvidia-device-plugin-daemonset-fhnrr" [8455a016-6ce8-40d4-bd64-ec3d2e30f774] Running
	I0923 12:29:44.958310  670144 system_pods.go:61] "registry-66c9cd494c-srklj" [ca56f86a-1049-47d9-b11b-9f492f1f0e5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 12:29:44.958314  670144 system_pods.go:61] "registry-proxy-xmmdr" [cf74bb33-75e5-4844-a3a8-fc698241ea5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 12:29:44.958320  670144 system_pods.go:61] "snapshot-controller-56fcc65765-76p2p" [20745ac3-21a3-45a6-8861-c0ba3567f38a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.958325  670144 system_pods.go:61] "snapshot-controller-56fcc65765-pzghc" [e4692d57-c84d-4bf1-bace-9d6a5a95d95e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.958331  670144 system_pods.go:61] "storage-provisioner" [3bc488f6-aa39-42bc-a0f5-173b2d7e07cf] Running
	I0923 12:29:44.958338  670144 system_pods.go:74] duration metric: took 9.414655ms to wait for pod list to return data ...
	I0923 12:29:44.958347  670144 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:29:44.961083  670144 default_sa.go:45] found service account: "default"
	I0923 12:29:44.961109  670144 default_sa.go:55] duration metric: took 2.755138ms for default service account to be created ...
	I0923 12:29:44.961119  670144 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:29:44.967937  670144 system_pods.go:86] 17 kube-system pods found
	I0923 12:29:44.967979  670144 system_pods.go:89] "coredns-7c65d6cfc9-cvw7x" [3de8bd3c-0baf-459b-94f8-f5d52ef1286d] Running
	I0923 12:29:44.967993  670144 system_pods.go:89] "csi-hostpath-attacher-0" [4c3e1f51-c4eb-4fa0-ab09-335efd2aa843] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 12:29:44.968001  670144 system_pods.go:89] "csi-hostpath-resizer-0" [e4676deb-26a8-4a3c-87ac-a226db6563ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 12:29:44.968012  670144 system_pods.go:89] "csi-hostpathplugin-jd2lw" [feb3c94a-858a-4f61-a148-8b64dcfd0934] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 12:29:44.968018  670144 system_pods.go:89] "etcd-addons-052630" [ecb6248b-7e04-4747-946a-eb8fc976147e] Running
	I0923 12:29:44.968024  670144 system_pods.go:89] "kube-apiserver-addons-052630" [578f26c5-733e-4d3b-85da-ecade8aa52dd] Running
	I0923 12:29:44.968029  670144 system_pods.go:89] "kube-controller-manager-addons-052630" [55212af5-b2df-4621-a846-c8912549238d] Running
	I0923 12:29:44.968037  670144 system_pods.go:89] "kube-ingress-dns-minikube" [2187b5c3-511a-4aab-a372-f66d680bbf18] Running
	I0923 12:29:44.968051  670144 system_pods.go:89] "kube-proxy-vn9km" [0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00] Running
	I0923 12:29:44.968057  670144 system_pods.go:89] "kube-scheduler-addons-052630" [a180218d-c5e9-4947-b527-7f9570b9c578] Running
	I0923 12:29:44.968066  670144 system_pods.go:89] "metrics-server-84c5f94fbc-2rhln" [e7c5ceb3-389e-43ff-b807-718f23f12b0f] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 12:29:44.968073  670144 system_pods.go:89] "nvidia-device-plugin-daemonset-fhnrr" [8455a016-6ce8-40d4-bd64-ec3d2e30f774] Running
	I0923 12:29:44.968088  670144 system_pods.go:89] "registry-66c9cd494c-srklj" [ca56f86a-1049-47d9-b11b-9f492f1f0e5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 12:29:44.968100  670144 system_pods.go:89] "registry-proxy-xmmdr" [cf74bb33-75e5-4844-a3a8-fc698241ea5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 12:29:44.968112  670144 system_pods.go:89] "snapshot-controller-56fcc65765-76p2p" [20745ac3-21a3-45a6-8861-c0ba3567f38a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.968131  670144 system_pods.go:89] "snapshot-controller-56fcc65765-pzghc" [e4692d57-c84d-4bf1-bace-9d6a5a95d95e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.968136  670144 system_pods.go:89] "storage-provisioner" [3bc488f6-aa39-42bc-a0f5-173b2d7e07cf] Running
	I0923 12:29:44.968149  670144 system_pods.go:126] duration metric: took 7.021444ms to wait for k8s-apps to be running ...
	I0923 12:29:44.968165  670144 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:29:44.968233  670144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:29:44.984699  670144 system_svc.go:56] duration metric: took 16.527101ms WaitForService to wait for kubelet
	I0923 12:29:44.984736  670144 kubeadm.go:582] duration metric: took 36.885810437s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:29:44.984757  670144 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:29:44.987925  670144 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:29:44.987958  670144 node_conditions.go:123] node cpu capacity is 2
	I0923 12:29:44.987971  670144 node_conditions.go:105] duration metric: took 3.209178ms to run NodePressure ...
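	The NodePressure verification reads the node's reported capacity (17734596Ki ephemeral storage and 2 CPUs here) and its pressure conditions from the node status. Roughly the same information can be pulled directly; a sketch assuming the node carries the profile name, as the static pod names above suggest:
	
		kubectl --context addons-052630 get node addons-052630 \
		  -o jsonpath='{.status.capacity}{"\n"}{.status.conditions[*].type}{"\n"}'
	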
	I0923 12:29:44.987984  670144 start.go:241] waiting for startup goroutines ...
	I0923 12:29:45.092993  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:45.381916  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:45.382878  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:45.405371  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:45.592889  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:45.882961  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:45.882986  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:45.905772  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:46.094099  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:46.381480  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:46.381480  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:46.405345  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:46.593680  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:46.881522  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:46.881585  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:46.907463  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:47.092649  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:47.381289  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:47.382803  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:47.404633  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:47.593242  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:47.881017  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:47.881741  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:47.905476  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:48.094283  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:48.381287  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:48.381678  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:48.404848  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:48.593290  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:49.182575  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:49.182862  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:49.183278  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:49.183600  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:49.387493  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:49.387949  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:49.409172  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:49.593041  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:49.881864  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:49.882012  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:49.905486  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:50.093223  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:50.381524  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:50.381911  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:50.405382  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:50.593121  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:50.882078  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:50.882130  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:50.904664  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:51.094395  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:51.381785  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:51.382965  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:51.404814  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:51.593466  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:51.881718  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:51.882182  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:51.906271  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:52.093535  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:52.381560  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:52.382447  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:52.483055  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:52.592715  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:52.882614  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:52.882831  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:52.905337  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:53.099377  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:53.382358  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:53.382434  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:53.405014  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:53.593255  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:53.881701  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:53.882109  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:53.905214  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:54.093317  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:54.381400  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:54.381756  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:54.405603  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:54.593298  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:54.881505  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:54.882280  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:54.905352  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:55.096080  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:55.381500  670144 kapi.go:107] duration metric: took 39.004256174s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 12:29:55.382262  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:55.407177  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:55.593060  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:55.881873  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:55.906292  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:56.095168  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:56.467534  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:56.467800  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:56.593413  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:56.881611  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:56.905852  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:57.093199  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:57.380555  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:57.407044  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:57.821632  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:57.881537  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:57.906086  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:58.093251  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:58.381225  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:58.405370  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:58.592999  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:58.882363  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:58.905848  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:59.092799  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:59.381850  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:59.405243  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:59.592647  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:59.883180  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:59.905462  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:00.093783  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:00.381525  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:00.405496  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:00.593067  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:00.882096  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:00.905415  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:01.093248  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:01.381090  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:01.404657  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:01.592915  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:01.881472  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:01.904650  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:02.094989  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:02.381519  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:02.482813  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:02.592969  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:02.881994  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:02.905592  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:03.092833  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:03.382442  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:03.737000  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:03.737731  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:03.881239  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:03.908549  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:04.092952  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:04.382596  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:04.406348  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:04.592523  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:04.882260  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:04.906335  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:05.093281  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:05.381532  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:05.404962  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:05.593867  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:05.881533  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:05.905611  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:06.092910  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:06.382350  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:06.405359  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:06.592970  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:06.881573  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:06.905700  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:07.093261  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:07.383765  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:07.406221  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:07.593359  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:07.881515  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:07.905283  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:08.094381  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:08.436545  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:08.437214  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:08.595352  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:08.881471  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:08.904728  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:09.094082  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:09.382329  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:09.418347  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:09.592417  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:09.882579  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:09.905086  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:10.093585  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:10.381916  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:10.408107  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:10.593205  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:10.881583  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:10.906213  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:11.092377  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:11.381528  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:11.405175  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:11.593188  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:11.881123  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:11.906575  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:12.093361  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:12.381510  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:12.418229  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:12.594390  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:12.883421  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:12.905655  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:13.093231  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:13.380738  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:13.409871  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:13.592706  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:13.881963  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:13.906221  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:14.092914  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:14.382057  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:14.405898  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:14.593405  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:14.883241  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:14.905532  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:15.092900  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:15.381659  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:15.404674  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:15.595837  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:15.884204  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:15.906723  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:16.096714  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:16.398360  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:16.492006  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:16.593666  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:16.886491  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:16.907334  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:17.105994  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:17.383325  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:17.406532  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:17.592593  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:17.881884  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:17.906107  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:18.098950  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:18.382178  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:18.406919  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:18.593795  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:18.881986  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:18.907032  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:19.093203  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:19.385652  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:19.486193  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:19.593670  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:20.158045  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:20.160442  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:20.160600  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:20.381193  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:20.406353  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:20.592767  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:20.881653  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:20.906233  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:21.092756  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:21.381504  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:21.404711  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:21.593682  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:21.882663  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:21.905651  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:22.094019  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:22.381116  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:22.482594  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:22.593429  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:22.882120  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:22.907262  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:23.093012  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:23.381337  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:23.416798  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:23.605942  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:23.883914  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:23.905484  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:24.092422  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:24.382490  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:24.404543  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:24.593615  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:24.882704  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:24.905157  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:25.092234  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:25.381913  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:25.406353  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:25.593550  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:25.881420  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:25.905759  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:26.092760  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:26.382791  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:26.404663  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:26.593511  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:26.881695  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:26.906109  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:27.092908  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:27.381352  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:27.405542  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:27.593292  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:27.881677  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:27.905877  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:28.093483  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:28.381903  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:28.405916  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:28.596909  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:28.883234  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:28.907825  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:29.093630  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:29.384206  670144 kapi.go:107] duration metric: took 1m13.007346283s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 12:30:29.408031  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:29.593154  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:29.905366  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:30.096542  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:30.407476  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:30.593391  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:30.905711  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:31.093234  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:31.406100  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:31.593583  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:31.905683  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:32.093451  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:32.405762  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:32.593457  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:32.906615  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:33.092949  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:33.405990  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:33.593662  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:33.908125  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:34.095552  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:34.410315  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:34.593641  670144 kapi.go:107] duration metric: took 1m15.004433334s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 12:30:34.596145  670144 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-052630 cluster.
	I0923 12:30:34.597867  670144 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 12:30:34.599357  670144 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
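The gcp-auth messages above amount to a short how-to: a pod can opt out of credential injection by carrying the `gcp-auth-skip-secret` label. A minimal sketch of creating such a pod with kubectl, assuming a throwaway busybox image and a label value of "true" (both illustrative assumptions, not taken from this run; only the context name and the label key come from the log above):

	# hypothetical demo pod; only the gcp-auth-skip-secret label key is from the message above
	kubectl --context addons-052630 run gcp-auth-skip-demo \
	  --image=busybox --restart=Never \
	  --labels=gcp-auth-skip-secret=true -- sleep 3600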
	I0923 12:30:34.905455  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:35.406462  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:35.906240  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:36.408440  670144 kapi.go:107] duration metric: took 1m18.507800959s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 12:30:36.410763  670144 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, ingress-dns, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0923 12:30:36.412731  670144 addons.go:510] duration metric: took 1m28.313766491s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass ingress-dns inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0923 12:30:36.412794  670144 start.go:246] waiting for cluster config update ...
	I0923 12:30:36.412829  670144 start.go:255] writing updated cluster config ...
	I0923 12:30:36.413342  670144 ssh_runner.go:195] Run: rm -f paused
	I0923 12:30:36.467246  670144 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 12:30:36.469473  670144 out.go:177] * Done! kubectl is now configured to use "addons-052630" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.508100364Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:2700e6a975e0821a451d1a3a41fc665ed1652d4380515018e498434fe7a5a0ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727094552247216634,StartedAt:1727094552495820924,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvw7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de8bd3c-0baf-459b-94f8-f5d52ef1286d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/3de8bd3c-0baf-459b-94f8-f5d52ef1286d/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/3de8bd3c-0baf-459b-94f8-f5d52ef1286d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/3de8bd3c-0baf-459b-94f8-f5d52ef1286d/containers/coredns/83eb3839,Readonly:false,SelinuxRelabel:false,Propagation:PRO
PAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/3de8bd3c-0baf-459b-94f8-f5d52ef1286d/volumes/kubernetes.io~projected/kube-api-access-t4pbn,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7c65d6cfc9-cvw7x_3de8bd3c-0baf-459b-94f8-f5d52ef1286d/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:982,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=95942913-d1bf-4c6a-b28f-0efb31a8347c name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.508642800Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00,Verbose:false,}" file="otel-collector/interceptors.go:62" id=2b309c5e-1352-4c4e-afc3-a360d58eb83d name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.508806334Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727094549383522402,StartedAt:1727094549504114451,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn9km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00/containers/kube-proxy/2004894b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/
kubelet/pods/0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00/volumes/kubernetes.io~projected/kube-api-access-rs2vp,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-vn9km_0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-colle
ctor/interceptors.go:74" id=2b309c5e-1352-4c4e-afc3-a360d58eb83d name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.511501402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39744337-1e20-459a-96a6-b41915b33cc3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.512482036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=557f0521-e2ec-4b39-98a8-226f4e89d790 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.514319225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095303514292886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39744337-1e20-459a-96a6-b41915b33cc3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.514674795Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d674c15b-6b20-4747-ba36-42ba95d0960c name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.514902668Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727094538316422547,StartedAt:1727094538418778528,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd793e50c81059d44a1e6fde8a448895,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/dd793e50c81059d44a1e6fde8a448895/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/dd793e50c81059d44a1e6fde8a448895/containers/kube-scheduler/d7904527,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-addons-052630_dd793e50c81059d44a1e6fde8a448895/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,
CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d674c15b-6b20-4747-ba36-42ba95d0960c name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.515469255Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ae3cab01-954a-4b45-b2f4-306275643a53 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.515627530Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727094538271159277,StartedAt:1727094538361328247,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7efdfb9180b7292c18423e02021138d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a7efdfb9180b7292c18423e02021138d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a7efdfb9180b7292c18423e02021138d/containers/kube-controller-manager/79909073,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMapp
ings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-addons-052630_a7efdfb9180b7292c18423e02021138d/kube-controller-manager/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,Hugepag
eLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ae3cab01-954a-4b45-b2f4-306275643a53 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.516106961Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f80930a1-c422-4652-9163-786af632eabe name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.516253432Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727094538197904437,StartedAt:1727094538347101816,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.15-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1947c799ac122c11eb2c15f2bc9fdc08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1947c799ac122c11eb2c15f2bc9fdc08/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1947c799ac122c11eb2c15f2bc9fdc08/containers/etcd/c4174577,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-addons-0
52630_1947c799ac122c11eb2c15f2bc9fdc08/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f80930a1-c422-4652-9163-786af632eabe name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.516643743Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0afc8d89-7736-4994-bfbc-dc9cb979e3cb name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.516770109Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727094538194941521,StartedAt:1727094538264673519,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c71f38e20d8cf8d860ac88cdd9241f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/66c71f38e20d8cf8d860ac88cdd9241f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/66c71f38e20d8cf8d860ac88cdd9241f/containers/kube-apiserver/b4eceef8,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/
var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-addons-052630_66c71f38e20d8cf8d860ac88cdd9241f/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0afc8d89-7736-4994-bfbc-dc9cb979e3cb name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.517640156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa1c1a73-76da-400f-b497-4b626b329f90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.517750252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa1c1a73-76da-400f-b497-4b626b329f90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.519153663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a089a3781add8595157385aad0947e2ea4b8c1571897261093173772dbd4029e,PodSandboxId:f8ba55a3e9041e3657843b6ffc7ffd919779e5373e2065f582f9201f5dbf0774,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727095295770795736,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qzcw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b254feb-b4af-4f12-9e52-a816f5d00bac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd658c0598e6c49415ca300ec19c8efc652697d90ca659d5332bd0cc8f9da0ce,PodSandboxId:e9d41568c174048781bd2e547ce07b9b7f13bd648556c363403a06a7374416ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727095155775653048,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 487480e4-f024-4e3c-9c18-a9aabd6129fb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c427e0695fa7dfe118179b0685857c7d96bbed4dca69a80b42715eb28daf3f3,PodSandboxId:e0f536b5e92b1765bbec31f330b1cbfc55061818c897748a2f248d41719fbcd7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727094633948657283,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-gzksd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 1b75c160-3198-402b-b135-861e77ac4482,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e27a8d8fca2436dbc6c6a61141fca32d7ee57899f062b30fe7985c09af2497d,PodSandboxId:ed4a201ebc8ba0f30f371834b83f2c66afb4f5882ee634a4917eace4fc0240ca,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727094612554187733,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-rt72w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be51de49-d024-4957-aa1c-cca98b0f88cd,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3acec0e873b6d270ce4f017668c10ddb9b853ceecdb55fa8e1c753abc4b762d,PodSandboxId:1c884f88ba6db8f1319071f0e2d608c1dfa5e0c14427ad8c874c2031e7a816cb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727094612406967962,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d2m8p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea774ad2-860f-4e87-b48c-369cdc2dd298,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f1ae050ce475e5a505a980ea72122b45036c60002591f0381f922671fc411a,PodSandboxId:17d85166b8277c2a9faa6b4607652c23931a05692eb0e979f495fa4c4552c2f9,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:172
7094606636049364,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-snqv8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 43c09017-cfad-4a08-b73c-bfba508afe73,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5,PodSandboxId:dfa6385e052b942da39e7f1efb907744acba0e7c89c40514021b4c90d419d7bc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb
19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727094558710109886,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rhln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c5ceb3-389e-43ff-b807-718f23f12b0f,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bbd55bde08fee5d7aeb446829fa511ea633c6594a8f94dbc19f40954380b59,PodSandboxId:7fc2b63648c6ce7f74862f514ca11336f589ba36807a84f82b5fe966e703bba1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727094554932322734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc488f6-aa39-42bc-a0f5-173b2d7e07cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2700e6a975e0821a451d1a3a41fc665ed1652d4380515018e498434fe7a5a0ff,PodSandboxId:f5725c70d12571297f1fbc08fcf7c6634ea79b711270178cb2861d7a021f4a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727094551725672407,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvw7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de8bd3c-0baf-459b-94f8-f5d52ef1286d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00,PodSandboxId:d54027fa53db00e856f587b7398dfbee79868ce10d8c9bc030a174a63
5717867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727094549016200714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn9km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a,PodSandboxId:1a45969da935e2684242fa5b07b35eaa8001d3fe9d4867c4f31f2152672a0eea,Metadata:&ContainerMetada
ta{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727094538170986390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd793e50c81059d44a1e6fde8a448895,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85,PodSandboxId:8618182b0365790203283b2a6cd2de064a98724d33806cc9f4eedfc629ad8516,Metadata:&ContainerMetadata{Name:kube-cont
roller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727094538165838825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7efdfb9180b7292c18423e02021138d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0,PodSandboxId:2f48abf774e208d8f1e5e0d05f63bfa69400ab9e4bb0147be37e97f07eed1343,Metadata:&ContainerMetadata{Name
:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727094538113594059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1947c799ac122c11eb2c15f2bc9fdc08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81,PodSandboxId:a16e26d2dc6966551d559c1a5d3db6a99724044ad4418a767d04c065c600a61d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Im
age:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727094538130237781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c71f38e20d8cf8d860ac88cdd9241f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa1c1a73-76da-400f-b497-4b626b329f90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.520831466Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095303520808639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=557f0521-e2ec-4b39-98a8-226f4e89d790 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.560721082Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c0d7cd1-7f84-4192-81af-6c4697555d24 name=/runtime.v1.RuntimeService/Version
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.560816813Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c0d7cd1-7f84-4192-81af-6c4697555d24 name=/runtime.v1.RuntimeService/Version
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.561846272Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23bfe9a4-7269-4a3c-9926-c5a26d49b596 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.562947549Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095303562916477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23bfe9a4-7269-4a3c-9926-c5a26d49b596 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.563820458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcbd3dc8-9147-4e99-8d46-364004f93dea name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.563893811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcbd3dc8-9147-4e99-8d46-364004f93dea name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:41:43 addons-052630 crio[664]: time="2024-09-23 12:41:43.564317409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a089a3781add8595157385aad0947e2ea4b8c1571897261093173772dbd4029e,PodSandboxId:f8ba55a3e9041e3657843b6ffc7ffd919779e5373e2065f582f9201f5dbf0774,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727095295770795736,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qzcw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b254feb-b4af-4f12-9e52-a816f5d00bac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd658c0598e6c49415ca300ec19c8efc652697d90ca659d5332bd0cc8f9da0ce,PodSandboxId:e9d41568c174048781bd2e547ce07b9b7f13bd648556c363403a06a7374416ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727095155775653048,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 487480e4-f024-4e3c-9c18-a9aabd6129fb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c427e0695fa7dfe118179b0685857c7d96bbed4dca69a80b42715eb28daf3f3,PodSandboxId:e0f536b5e92b1765bbec31f330b1cbfc55061818c897748a2f248d41719fbcd7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727094633948657283,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-gzksd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 1b75c160-3198-402b-b135-861e77ac4482,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e27a8d8fca2436dbc6c6a61141fca32d7ee57899f062b30fe7985c09af2497d,PodSandboxId:ed4a201ebc8ba0f30f371834b83f2c66afb4f5882ee634a4917eace4fc0240ca,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727094612554187733,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-rt72w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be51de49-d024-4957-aa1c-cca98b0f88cd,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3acec0e873b6d270ce4f017668c10ddb9b853ceecdb55fa8e1c753abc4b762d,PodSandboxId:1c884f88ba6db8f1319071f0e2d608c1dfa5e0c14427ad8c874c2031e7a816cb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727094612406967962,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d2m8p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea774ad2-860f-4e87-b48c-369cdc2dd298,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f1ae050ce475e5a505a980ea72122b45036c60002591f0381f922671fc411a,PodSandboxId:17d85166b8277c2a9faa6b4607652c23931a05692eb0e979f495fa4c4552c2f9,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:172
7094606636049364,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-snqv8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 43c09017-cfad-4a08-b73c-bfba508afe73,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5,PodSandboxId:dfa6385e052b942da39e7f1efb907744acba0e7c89c40514021b4c90d419d7bc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb
19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727094558710109886,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rhln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c5ceb3-389e-43ff-b807-718f23f12b0f,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bbd55bde08fee5d7aeb446829fa511ea633c6594a8f94dbc19f40954380b59,PodSandboxId:7fc2b63648c6ce7f74862f514ca11336f589ba36807a84f82b5fe966e703bba1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727094554932322734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc488f6-aa39-42bc-a0f5-173b2d7e07cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2700e6a975e0821a451d1a3a41fc665ed1652d4380515018e498434fe7a5a0ff,PodSandboxId:f5725c70d12571297f1fbc08fcf7c6634ea79b711270178cb2861d7a021f4a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727094551725672407,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvw7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de8bd3c-0baf-459b-94f8-f5d52ef1286d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00,PodSandboxId:d54027fa53db00e856f587b7398dfbee79868ce10d8c9bc030a174a63
5717867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727094549016200714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn9km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a,PodSandboxId:1a45969da935e2684242fa5b07b35eaa8001d3fe9d4867c4f31f2152672a0eea,Metadata:&ContainerMetada
ta{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727094538170986390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd793e50c81059d44a1e6fde8a448895,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85,PodSandboxId:8618182b0365790203283b2a6cd2de064a98724d33806cc9f4eedfc629ad8516,Metadata:&ContainerMetadata{Name:kube-cont
roller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727094538165838825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7efdfb9180b7292c18423e02021138d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0,PodSandboxId:2f48abf774e208d8f1e5e0d05f63bfa69400ab9e4bb0147be37e97f07eed1343,Metadata:&ContainerMetadata{Name
:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727094538113594059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1947c799ac122c11eb2c15f2bc9fdc08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81,PodSandboxId:a16e26d2dc6966551d559c1a5d3db6a99724044ad4418a767d04c065c600a61d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Im
age:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727094538130237781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c71f38e20d8cf8d860ac88cdd9241f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcbd3dc8-9147-4e99-8d46-364004f93dea name=/runtime.v1.RuntimeService/ListContainers
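
Note: the crio debug entries above are CRI v1 RPCs (/runtime.v1.RuntimeService/ListContainers, /runtime.v1.RuntimeService/ContainerStatus, ImageFsInfo, Version) arriving at CRI-O over its socket, unix:///var/run/crio/crio.sock (the path reported in the node's cri-socket annotation further down). The following is a minimal sketch, not part of the minikube test suite, of issuing the same ListContainers call directly; it assumes the k8s.io/cri-api runtime/v1 bindings and grpc-go are available on the client side.

	// cri_ls.go: sketch of the ListContainers RPC seen in the crio log above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		criv1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Socket path as reported in the node's cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O socket: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := criv1.NewRuntimeServiceClient(conn)

		// Empty filter, as in the logged request: returns the full container list.
		resp, err := rt.ListContainers(ctx, &criv1.ListContainersRequest{Filter: &criv1.ContainerFilter{}})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s  %s\n", c.GetId(), c.GetMetadata().GetName(), c.GetState())
		}
	}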
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a089a3781add8       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   f8ba55a3e9041       hello-world-app-55bf9c44b4-qzcw6
	dd658c0598e6c       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   e9d41568c1740       nginx
	4c427e0695fa7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   e0f536b5e92b1       gcp-auth-89d5ffd79-gzksd
	9e27a8d8fca24       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              patch                     0                   ed4a201ebc8ba       ingress-nginx-admission-patch-rt72w
	b3acec0e873b6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   1c884f88ba6db       ingress-nginx-admission-create-d2m8p
	50f1ae050ce47       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             11 minutes ago      Running             local-path-provisioner    0                   17d85166b8277       local-path-provisioner-86d989889c-snqv8
	54c2b9200f7a3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   dfa6385e052b9       metrics-server-84c5f94fbc-2rhln
	58bbd55bde08f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   7fc2b63648c6c       storage-provisioner
	2700e6a975e08       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   f5725c70d1257       coredns-7c65d6cfc9-cvw7x
	4f2e68fe05415       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             12 minutes ago      Running             kube-proxy                0                   d54027fa53db0       kube-proxy-vn9km
	2d98809372a26       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             12 minutes ago      Running             kube-scheduler            0                   1a45969da935e       kube-scheduler-addons-052630
	137997c74fead       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             12 minutes ago      Running             kube-controller-manager   0                   8618182b03657       kube-controller-manager-addons-052630
	b706da2e61377       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             12 minutes ago      Running             kube-apiserver            0                   a16e26d2dc696       kube-apiserver-addons-052630
	84885d234fc5d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago      Running             etcd                      0                   2f48abf774e20       etcd-addons-052630
	
	
	==> coredns [2700e6a975e0821a451d1a3a41fc665ed1652d4380515018e498434fe7a5a0ff] <==
	[INFO] 10.244.0.7:59787 - 46467 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082041s
	[INFO] 10.244.0.21:50719 - 3578 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000678697s
	[INFO] 10.244.0.21:59846 - 36057 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000185909s
	[INFO] 10.244.0.21:51800 - 41027 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131443s
	[INFO] 10.244.0.21:60988 - 60393 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000092533s
	[INFO] 10.244.0.21:37198 - 50317 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088047s
	[INFO] 10.244.0.21:53871 - 9639 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076299s
	[INFO] 10.244.0.21:35205 - 14039 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004685857s
	[INFO] 10.244.0.21:34331 - 9494 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00457672s
	[INFO] 10.244.0.7:43442 - 53421 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000388692s
	[INFO] 10.244.0.7:43442 - 62888 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000079319s
	[INFO] 10.244.0.7:55893 - 18422 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000147084s
	[INFO] 10.244.0.7:55893 - 9973 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095576s
	[INFO] 10.244.0.7:47983 - 23764 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188893s
	[INFO] 10.244.0.7:47983 - 4566 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115139s
	[INFO] 10.244.0.7:50253 - 35636 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000151794s
	[INFO] 10.244.0.7:50253 - 39730 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122834s
	[INFO] 10.244.0.7:52374 - 7303 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000165376s
	[INFO] 10.244.0.7:52374 - 65467 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108039s
	[INFO] 10.244.0.7:38944 - 938 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084751s
	[INFO] 10.244.0.7:38944 - 32437 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074543s
	[INFO] 10.244.0.7:35936 - 54263 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055079s
	[INFO] 10.244.0.7:35936 - 63221 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100045s
	[INFO] 10.244.0.7:58342 - 30223 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010406s
	[INFO] 10.244.0.7:58342 - 58610 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006497s
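
Note: in the coredns log above, the lookups for registry.kube-system.svc.cluster.local return NOERROR with an A answer in well under a millisecond, i.e. in-cluster name resolution for that service was working when these logs were captured. A minimal sketch that separates name resolution from HTTP reachability follows; it is illustrative only (not the test's own check), it assumes it is run from a pod inside the cluster, and the plain-HTTP probe on the service's default port is an assumption.

	// regprobe.go: resolve the service name, then probe it over HTTP.
	package main

	import (
		"fmt"
		"net"
		"net/http"
		"time"
	)

	func main() {
		const host = "registry.kube-system.svc.cluster.local"

		// Step 1: resolution only. The coredns log above answers this with NOERROR.
		addrs, err := net.LookupHost(host)
		fmt.Printf("lookup %s -> %v (err=%v)\n", host, addrs, err)

		// Step 2: reachability. A timeout or connection error here points at the
		// service/endpoint layer rather than at DNS.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://" + host + "/")
		if err != nil {
			fmt.Printf("GET http://%s/ failed: %v\n", host, err)
			return
		}
		defer resp.Body.Close()
		fmt.Printf("GET http://%s/ -> %s\n", host, resp.Status)
	}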
	
	
	==> describe nodes <==
	Name:               addons-052630
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-052630
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-052630
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_29_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-052630
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-052630
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:41:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:40:07 +0000   Mon, 23 Sep 2024 12:28:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:40:07 +0000   Mon, 23 Sep 2024 12:28:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:40:07 +0000   Mon, 23 Sep 2024 12:28:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:40:07 +0000   Mon, 23 Sep 2024 12:29:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    addons-052630
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 46d8dccd290a43399ed351791d0287b7
	  System UUID:                46d8dccd-290a-4339-9ed3-51791d0287b7
	  Boot ID:                    aef77f72-28ae-4358-8b71-243c7f96a73e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-qzcw6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-89d5ffd79-gzksd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-cvw7x                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-052630                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-052630               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-052630      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vn9km                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-052630               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-2rhln            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-snqv8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-052630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-052630 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-052630 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-052630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-052630 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-052630 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-052630 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-052630 event: Registered Node addons-052630 in Controller
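
Note: the Allocated resources figures above follow directly from the node capacity. CPU: 850m requested out of 2 CPUs (2000m) is 42.5%. Memory: 370Mi = 378,880Ki requested out of 3,912,780Ki is about 9.7%, and the 170Mi = 174,080Ki limit is about 4.4%. The whole-number percentages in the table (42%, 9%, 4%) are consistent with these values being truncated to integers.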
	
	
	==> dmesg <==
	[  +5.024950] kauditd_printk_skb: 96 callbacks suppressed
	[  +9.303213] kauditd_printk_skb: 112 callbacks suppressed
	[ +30.702636] kauditd_printk_skb: 2 callbacks suppressed
	[Sep23 12:30] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.339998] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.675662] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.264623] kauditd_printk_skb: 74 callbacks suppressed
	[  +7.313349] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.509035] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.134771] kauditd_printk_skb: 52 callbacks suppressed
	[Sep23 12:31] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 12:33] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 12:36] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 12:38] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.581855] kauditd_printk_skb: 6 callbacks suppressed
	[Sep23 12:39] kauditd_printk_skb: 26 callbacks suppressed
	[ +14.124700] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.773860] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.300408] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.891663] kauditd_printk_skb: 6 callbacks suppressed
	[ +11.246442] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.325312] kauditd_printk_skb: 41 callbacks suppressed
	[Sep23 12:40] kauditd_printk_skb: 21 callbacks suppressed
	[Sep23 12:41] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.744867] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0] <==
	{"level":"warn","ts":"2024-09-23T12:38:45.585401Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.347829ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.585825Z","caller":"traceutil/trace.go:171","msg":"trace[689683591] linearizableReadLoop","detail":"{readStateIndex:2107; appliedIndex:2106; }","duration":"240.076181ms","start":"2024-09-23T12:38:45.345716Z","end":"2024-09-23T12:38:45.585792Z","steps":["trace[689683591] 'read index received'  (duration: 239.056299ms)","trace[689683591] 'applied index is now lower than readState.Index'  (duration: 1.019485ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T12:38:45.585991Z","caller":"traceutil/trace.go:171","msg":"trace[1433974130] transaction","detail":"{read_only:false; response_revision:1969; number_of_response:1; }","duration":"412.405735ms","start":"2024-09-23T12:38:45.173572Z","end":"2024-09-23T12:38:45.585978Z","steps":["trace[1433974130] 'process raft request'  (duration: 411.242557ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.586153Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T12:38:45.173553Z","time spent":"412.503245ms","remote":"127.0.0.1:41198","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-052630\" mod_revision:1922 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-052630\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-052630\" > >"}
	{"level":"warn","ts":"2024-09-23T12:38:45.586522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.799311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586554Z","caller":"traceutil/trace.go:171","msg":"trace[1061360463] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1969; }","duration":"240.827325ms","start":"2024-09-23T12:38:45.345712Z","end":"2024-09-23T12:38:45.586540Z","steps":["trace[1061360463] 'agreement among raft nodes before linearized reading'  (duration: 240.547514ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.586713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.275793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586729Z","caller":"traceutil/trace.go:171","msg":"trace[1600622772] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1969; }","duration":"181.293593ms","start":"2024-09-23T12:38:45.405431Z","end":"2024-09-23T12:38:45.586724Z","steps":["trace[1600622772] 'agreement among raft nodes before linearized reading'  (duration: 181.261953ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.586889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.90923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586903Z","caller":"traceutil/trace.go:171","msg":"trace[43504617] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1969; }","duration":"108.925213ms","start":"2024-09-23T12:38:45.477974Z","end":"2024-09-23T12:38:45.586899Z","steps":["trace[43504617] 'agreement among raft nodes before linearized reading'  (duration: 108.900464ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.586971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.015116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586992Z","caller":"traceutil/trace.go:171","msg":"trace[1522914426] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1969; }","duration":"109.03651ms","start":"2024-09-23T12:38:45.477951Z","end":"2024-09-23T12:38:45.586988Z","steps":["trace[1522914426] 'agreement among raft nodes before linearized reading'  (duration: 109.008631ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.587155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.402947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-09-23T12:38:45.587172Z","caller":"traceutil/trace.go:171","msg":"trace[1003053304] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1969; }","duration":"122.420273ms","start":"2024-09-23T12:38:45.464747Z","end":"2024-09-23T12:38:45.587167Z","steps":["trace[1003053304] 'agreement among raft nodes before linearized reading'  (duration: 122.358904ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:45.588792Z","caller":"traceutil/trace.go:171","msg":"trace[1914231593] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1968; }","duration":"223.550909ms","start":"2024-09-23T12:38:45.361971Z","end":"2024-09-23T12:38:45.585522Z","steps":["trace[1914231593] 'range keys from in-memory index tree'  (duration: 223.329199ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:59.411855Z","caller":"traceutil/trace.go:171","msg":"trace[1835850910] transaction","detail":"{read_only:false; response_revision:2049; number_of_response:1; }","duration":"277.964156ms","start":"2024-09-23T12:38:59.133873Z","end":"2024-09-23T12:38:59.411837Z","steps":["trace[1835850910] 'process raft request'  (duration: 277.797273ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:59.412118Z","caller":"traceutil/trace.go:171","msg":"trace[494165466] linearizableReadLoop","detail":"{readStateIndex:2191; appliedIndex:2191; }","duration":"230.364595ms","start":"2024-09-23T12:38:59.181745Z","end":"2024-09-23T12:38:59.412110Z","steps":["trace[494165466] 'read index received'  (duration: 230.361284ms)","trace[494165466] 'applied index is now lower than readState.Index'  (duration: 2.661µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T12:38:59.412326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.027808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-23T12:38:59.412352Z","caller":"traceutil/trace.go:171","msg":"trace[1017305449] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:2049; }","duration":"166.068608ms","start":"2024-09-23T12:38:59.246275Z","end":"2024-09-23T12:38:59.412343Z","steps":["trace[1017305449] 'agreement among raft nodes before linearized reading'  (duration: 165.97691ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:59.412565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.833337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-23T12:38:59.412600Z","caller":"traceutil/trace.go:171","msg":"trace[1433149078] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2049; }","duration":"230.871055ms","start":"2024-09-23T12:38:59.181723Z","end":"2024-09-23T12:38:59.412594Z","steps":["trace[1433149078] 'agreement among raft nodes before linearized reading'  (duration: 230.777381ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:59.490314Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1537}
	{"level":"info","ts":"2024-09-23T12:38:59.546892Z","caller":"traceutil/trace.go:171","msg":"trace[1169948736] transaction","detail":"{read_only:false; response_revision:2050; number_of_response:1; }","duration":"130.033368ms","start":"2024-09-23T12:38:59.416838Z","end":"2024-09-23T12:38:59.546872Z","steps":["trace[1169948736] 'process raft request'  (duration: 74.021052ms)","trace[1169948736] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; req_size:1095; } (duration: 55.627555ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T12:38:59.562704Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1537,"took":"71.895193ms","hash":851007697,"current-db-size-bytes":6762496,"current-db-size":"6.8 MB","current-db-size-in-use-bytes":3760128,"current-db-size-in-use":"3.8 MB"}
	{"level":"info","ts":"2024-09-23T12:38:59.562759Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":851007697,"revision":1537,"compact-revision":-1}
	
	
	==> gcp-auth [4c427e0695fa7dfe118179b0685857c7d96bbed4dca69a80b42715eb28daf3f3] <==
	2024/09/23 12:30:36 Ready to write response ...
	2024/09/23 12:30:36 Ready to marshal response ...
	2024/09/23 12:30:36 Ready to write response ...
	2024/09/23 12:38:40 Ready to marshal response ...
	2024/09/23 12:38:40 Ready to write response ...
	2024/09/23 12:38:40 Ready to marshal response ...
	2024/09/23 12:38:40 Ready to write response ...
	2024/09/23 12:38:40 Ready to marshal response ...
	2024/09/23 12:38:40 Ready to write response ...
	2024/09/23 12:38:51 Ready to marshal response ...
	2024/09/23 12:38:51 Ready to write response ...
	2024/09/23 12:38:54 Ready to marshal response ...
	2024/09/23 12:38:54 Ready to write response ...
	2024/09/23 12:39:10 Ready to marshal response ...
	2024/09/23 12:39:10 Ready to write response ...
	2024/09/23 12:39:17 Ready to marshal response ...
	2024/09/23 12:39:17 Ready to write response ...
	2024/09/23 12:39:50 Ready to marshal response ...
	2024/09/23 12:39:50 Ready to write response ...
	2024/09/23 12:39:50 Ready to marshal response ...
	2024/09/23 12:39:50 Ready to write response ...
	2024/09/23 12:40:02 Ready to marshal response ...
	2024/09/23 12:40:02 Ready to write response ...
	2024/09/23 12:41:32 Ready to marshal response ...
	2024/09/23 12:41:32 Ready to write response ...
	
	
	==> kernel <==
	 12:41:43 up 13 min,  0 users,  load average: 0.34, 0.62, 0.53
	Linux addons-052630 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81] <==
	E0923 12:30:23.508414       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.53.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.53.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.53.17:443: connect: connection refused" logger="UnhandledError"
	I0923 12:30:23.642288       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 12:38:40.310945       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.127.218"}
	I0923 12:39:05.053206       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 12:39:06.091724       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 12:39:07.866473       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 12:39:10.766646       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 12:39:10.966355       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.172.184"}
	I0923 12:39:32.696168       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.696258       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.715555       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.715618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.748060       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.748123       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.774215       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.775062       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.821384       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.821480       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 12:39:33.774424       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 12:39:33.821825       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 12:39:33.904647       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0923 12:41:32.916892       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.99.88"}
	E0923 12:41:35.730354       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0923 12:41:38.452991       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0923 12:41:38.458981       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85] <==
	E0923 12:40:13.943132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:40:14.477785       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:40:14.477862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:40:21.268411       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:40:21.268515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:40:32.724395       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:40:32.724450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:40:47.530648       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:40:47.530698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:41:02.046527       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:41:02.046611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:41:11.965662       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:41:11.965900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:41:13.180691       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:41:13.180806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:41:32.759673       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="70.456395ms"
	I0923 12:41:32.773122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.388814ms"
	I0923 12:41:32.773213       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.942µs"
	I0923 12:41:35.582603       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0923 12:41:35.587399       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="7.128µs"
	I0923 12:41:35.598315       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0923 12:41:36.886085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.1284ms"
	I0923 12:41:36.886306       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="52.185µs"
	W0923 12:41:38.425825       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:41:38.425877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 12:29:09.744228       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 12:29:09.770791       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.225"]
	E0923 12:29:09.770866       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 12:29:09.869461       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 12:29:09.869490       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 12:29:09.869514       1 server_linux.go:169] "Using iptables Proxier"
	I0923 12:29:09.873228       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 12:29:09.873652       1 server.go:483] "Version info" version="v1.31.1"
	I0923 12:29:09.873664       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:29:09.875209       1 config.go:199] "Starting service config controller"
	I0923 12:29:09.875235       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 12:29:09.875268       1 config.go:105] "Starting endpoint slice config controller"
	I0923 12:29:09.875271       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 12:29:09.875715       1 config.go:328] "Starting node config controller"
	I0923 12:29:09.875721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 12:29:09.975594       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 12:29:09.976446       1 shared_informer.go:320] Caches are synced for node config
	I0923 12:29:09.976502       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a] <==
	W0923 12:29:00.681864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:29:00.681896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:00.681942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 12:29:00.681966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:00.681871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:00.682069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:00.682524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 12:29:00.682555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.521067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 12:29:01.521115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.593793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:29:01.593842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.675102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:29:01.675475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.701107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 12:29:01.701156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.718193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:01.718242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.750179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:01.750230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.832371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:01.832582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.940561       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:29:01.940868       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 12:29:04.675339       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 12:41:33 addons-052630 kubelet[1207]: I0923 12:41:33.836601    1207 scope.go:117] "RemoveContainer" containerID="34581f4844950b97c13f86aaaeaa7e10c5234c24af43cc30c33ba88e6862327a"
	Sep 23 12:41:33 addons-052630 kubelet[1207]: I0923 12:41:33.859419    1207 scope.go:117] "RemoveContainer" containerID="34581f4844950b97c13f86aaaeaa7e10c5234c24af43cc30c33ba88e6862327a"
	Sep 23 12:41:33 addons-052630 kubelet[1207]: E0923 12:41:33.859930    1207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34581f4844950b97c13f86aaaeaa7e10c5234c24af43cc30c33ba88e6862327a\": container with ID starting with 34581f4844950b97c13f86aaaeaa7e10c5234c24af43cc30c33ba88e6862327a not found: ID does not exist" containerID="34581f4844950b97c13f86aaaeaa7e10c5234c24af43cc30c33ba88e6862327a"
	Sep 23 12:41:33 addons-052630 kubelet[1207]: I0923 12:41:33.859976    1207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34581f4844950b97c13f86aaaeaa7e10c5234c24af43cc30c33ba88e6862327a"} err="failed to get container status \"34581f4844950b97c13f86aaaeaa7e10c5234c24af43cc30c33ba88e6862327a\": rpc error: code = NotFound desc = could not find container \"34581f4844950b97c13f86aaaeaa7e10c5234c24af43cc30c33ba88e6862327a\": container with ID starting with 34581f4844950b97c13f86aaaeaa7e10c5234c24af43cc30c33ba88e6862327a not found: ID does not exist"
	Sep 23 12:41:33 addons-052630 kubelet[1207]: I0923 12:41:33.918527    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzsfq\" (UniqueName: \"kubernetes.io/projected/2187b5c3-511a-4aab-a372-f66d680bbf18-kube-api-access-pzsfq\") pod \"2187b5c3-511a-4aab-a372-f66d680bbf18\" (UID: \"2187b5c3-511a-4aab-a372-f66d680bbf18\") "
	Sep 23 12:41:33 addons-052630 kubelet[1207]: I0923 12:41:33.920853    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2187b5c3-511a-4aab-a372-f66d680bbf18-kube-api-access-pzsfq" (OuterVolumeSpecName: "kube-api-access-pzsfq") pod "2187b5c3-511a-4aab-a372-f66d680bbf18" (UID: "2187b5c3-511a-4aab-a372-f66d680bbf18"). InnerVolumeSpecName "kube-api-access-pzsfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:41:34 addons-052630 kubelet[1207]: I0923 12:41:34.019668    1207 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pzsfq\" (UniqueName: \"kubernetes.io/projected/2187b5c3-511a-4aab-a372-f66d680bbf18-kube-api-access-pzsfq\") on node \"addons-052630\" DevicePath \"\""
	Sep 23 12:41:35 addons-052630 kubelet[1207]: I0923 12:41:35.126503    1207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2187b5c3-511a-4aab-a372-f66d680bbf18" path="/var/lib/kubelet/pods/2187b5c3-511a-4aab-a372-f66d680bbf18/volumes"
	Sep 23 12:41:37 addons-052630 kubelet[1207]: I0923 12:41:37.125683    1207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be51de49-d024-4957-aa1c-cca98b0f88cd" path="/var/lib/kubelet/pods/be51de49-d024-4957-aa1c-cca98b0f88cd/volumes"
	Sep 23 12:41:37 addons-052630 kubelet[1207]: I0923 12:41:37.126263    1207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea774ad2-860f-4e87-b48c-369cdc2dd298" path="/var/lib/kubelet/pods/ea774ad2-860f-4e87-b48c-369cdc2dd298/volumes"
	Sep 23 12:41:38 addons-052630 kubelet[1207]: I0923 12:41:38.854600    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d098fb8-3ff9-4429-a01a-80cb0eabbfce-webhook-cert\") pod \"6d098fb8-3ff9-4429-a01a-80cb0eabbfce\" (UID: \"6d098fb8-3ff9-4429-a01a-80cb0eabbfce\") "
	Sep 23 12:41:38 addons-052630 kubelet[1207]: I0923 12:41:38.854652    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5k98\" (UniqueName: \"kubernetes.io/projected/6d098fb8-3ff9-4429-a01a-80cb0eabbfce-kube-api-access-t5k98\") pod \"6d098fb8-3ff9-4429-a01a-80cb0eabbfce\" (UID: \"6d098fb8-3ff9-4429-a01a-80cb0eabbfce\") "
	Sep 23 12:41:38 addons-052630 kubelet[1207]: I0923 12:41:38.856693    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d098fb8-3ff9-4429-a01a-80cb0eabbfce-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6d098fb8-3ff9-4429-a01a-80cb0eabbfce" (UID: "6d098fb8-3ff9-4429-a01a-80cb0eabbfce"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 23 12:41:38 addons-052630 kubelet[1207]: I0923 12:41:38.857583    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d098fb8-3ff9-4429-a01a-80cb0eabbfce-kube-api-access-t5k98" (OuterVolumeSpecName: "kube-api-access-t5k98") pod "6d098fb8-3ff9-4429-a01a-80cb0eabbfce" (UID: "6d098fb8-3ff9-4429-a01a-80cb0eabbfce"). InnerVolumeSpecName "kube-api-access-t5k98". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:41:38 addons-052630 kubelet[1207]: I0923 12:41:38.871933    1207 scope.go:117] "RemoveContainer" containerID="df9183c228f7fc4dfd7472a4d3a1ca2afa4a41ea61b590a672d768ae47ee6707"
	Sep 23 12:41:38 addons-052630 kubelet[1207]: I0923 12:41:38.892251    1207 scope.go:117] "RemoveContainer" containerID="df9183c228f7fc4dfd7472a4d3a1ca2afa4a41ea61b590a672d768ae47ee6707"
	Sep 23 12:41:38 addons-052630 kubelet[1207]: E0923 12:41:38.892715    1207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9183c228f7fc4dfd7472a4d3a1ca2afa4a41ea61b590a672d768ae47ee6707\": container with ID starting with df9183c228f7fc4dfd7472a4d3a1ca2afa4a41ea61b590a672d768ae47ee6707 not found: ID does not exist" containerID="df9183c228f7fc4dfd7472a4d3a1ca2afa4a41ea61b590a672d768ae47ee6707"
	Sep 23 12:41:38 addons-052630 kubelet[1207]: I0923 12:41:38.892743    1207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9183c228f7fc4dfd7472a4d3a1ca2afa4a41ea61b590a672d768ae47ee6707"} err="failed to get container status \"df9183c228f7fc4dfd7472a4d3a1ca2afa4a41ea61b590a672d768ae47ee6707\": rpc error: code = NotFound desc = could not find container \"df9183c228f7fc4dfd7472a4d3a1ca2afa4a41ea61b590a672d768ae47ee6707\": container with ID starting with df9183c228f7fc4dfd7472a4d3a1ca2afa4a41ea61b590a672d768ae47ee6707 not found: ID does not exist"
	Sep 23 12:41:38 addons-052630 kubelet[1207]: I0923 12:41:38.955802    1207 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d098fb8-3ff9-4429-a01a-80cb0eabbfce-webhook-cert\") on node \"addons-052630\" DevicePath \"\""
	Sep 23 12:41:38 addons-052630 kubelet[1207]: I0923 12:41:38.955839    1207 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t5k98\" (UniqueName: \"kubernetes.io/projected/6d098fb8-3ff9-4429-a01a-80cb0eabbfce-kube-api-access-t5k98\") on node \"addons-052630\" DevicePath \"\""
	Sep 23 12:41:39 addons-052630 kubelet[1207]: I0923 12:41:39.125252    1207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d098fb8-3ff9-4429-a01a-80cb0eabbfce" path="/var/lib/kubelet/pods/6d098fb8-3ff9-4429-a01a-80cb0eabbfce/volumes"
	Sep 23 12:41:43 addons-052630 kubelet[1207]: E0923 12:41:43.125488    1207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="54b6f502-dc45-4c6f-b200-f29eb7e0a0c3"
	Sep 23 12:41:43 addons-052630 kubelet[1207]: I0923 12:41:43.136437    1207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-qzcw6" podStartSLOduration=8.698233 podStartE2EDuration="11.136406476s" podCreationTimestamp="2024-09-23 12:41:32 +0000 UTC" firstStartedPulling="2024-09-23 12:41:33.309346691 +0000 UTC m=+750.321944366" lastFinishedPulling="2024-09-23 12:41:35.747520167 +0000 UTC m=+752.760117842" observedRunningTime="2024-09-23 12:41:36.87368355 +0000 UTC m=+753.886281254" watchObservedRunningTime="2024-09-23 12:41:43.136406476 +0000 UTC m=+760.149004171"
	Sep 23 12:41:43 addons-052630 kubelet[1207]: E0923 12:41:43.521132    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095303520808639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 12:41:43 addons-052630 kubelet[1207]: E0923 12:41:43.521157    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095303520808639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [58bbd55bde08fee5d7aeb446829fa511ea633c6594a8f94dbc19f40954380b59] <==
	I0923 12:29:15.418528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 12:29:15.469448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 12:29:15.469505       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 12:29:15.499374       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 12:29:15.512080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-052630_666822b7-806c-46b8-b021-ef12b62fd031!
	I0923 12:29:15.512828       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16ab68d2-163f-4497-86c2-19800b48c856", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-052630_666822b7-806c-46b8-b021-ef12b62fd031 became leader
	I0923 12:29:15.856800       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-052630_666822b7-806c-46b8-b021-ef12b62fd031!
	

                                                
                                                
-- /stdout --
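One pattern worth noting in the dump above: the etcd container repeatedly logged "apply request took too long" (several 100-400ms read and write applies around 12:38:45-12:38:59) before the scheduled compaction of revision 1537 completed. As a hedged follow-up, not something the harness runs, the same warnings could be pulled straight from the etcd pod; the pod name below assumes the usual kubeadm etcd-<node> naming:
	# hypothetical follow-up, not part of the recorded test run
	kubectl --context addons-052630 -n kube-system logs etcd-addons-052630 | grep "apply request took too long"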
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-052630 -n addons-052630
helpers_test.go:261: (dbg) Run:  kubectl --context addons-052630 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-052630 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-052630 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-052630/192.168.39.225
	Start Time:       Mon, 23 Sep 2024 12:30:36 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hx7h2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hx7h2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-052630
	  Normal   Pulling    9m46s (x4 over 11m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     9m46s (x4 over 11m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     9m46s (x4 over 11m)  kubelet            Error: ErrImagePull
	  Warning  Failed     9m19s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    55s (x43 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.21s)
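The only non-Running pod flagged by the post-mortem is busybox, stuck in ImagePullBackOff because the kubelet could not fetch an auth token for gcr.io/k8s-minikube/busybox:1.28.4-glibc ("invalid username/password: unauthorized"). A minimal way to separate a registry-auth problem from a node problem, sketched here as a hypothetical check rather than anything the harness executed, is to retry the pull directly on the node and to see whether the gcp-auth addon attached pull credentials to the pod:
	# hypothetical checks, not part of the recorded test run
	out/minikube-linux-amd64 -p addons-052630 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	kubectl --context addons-052630 get pod busybox -o jsonpath='{.spec.imagePullSecrets}'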

                                                
                                    
TestAddons/parallel/MetricsServer (355.95s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 4.047362ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-2rhln" [e7c5ceb3-389e-43ff-b807-718f23f12b0f] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.114672104s
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (110.453022ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 9m37.739259625s

                                                
                                                
** /stderr **
I0923 12:38:45.742538  669447 retry.go:31] will retry after 2.205151553s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (70.302173ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 9m40.016568765s

                                                
                                                
** /stderr **
I0923 12:38:48.019218  669447 retry.go:31] will retry after 5.499728166s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (77.334594ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 9m45.595006152s

                                                
                                                
** /stderr **
I0923 12:38:53.597192  669447 retry.go:31] will retry after 8.126034377s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (87.053253ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 9m53.809223768s

                                                
                                                
** /stderr **
I0923 12:39:01.811634  669447 retry.go:31] will retry after 7.843924084s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (66.908164ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 10m1.721228885s

                                                
                                                
** /stderr **
I0923 12:39:09.723502  669447 retry.go:31] will retry after 11.377894225s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (72.056154ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 10m13.172116793s

                                                
                                                
** /stderr **
I0923 12:39:21.174378  669447 retry.go:31] will retry after 11.744774157s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (92.883481ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 10m25.010167585s

                                                
                                                
** /stderr **
I0923 12:39:33.012476  669447 retry.go:31] will retry after 44.306539174s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (74.572207ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 11m9.391004794s

                                                
                                                
** /stderr **
I0923 12:40:17.393942  669447 retry.go:31] will retry after 56.485108703s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (73.024856ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 12m5.949742566s

                                                
                                                
** /stderr **
I0923 12:41:13.952731  669447 retry.go:31] will retry after 1m15.676699544s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (72.178955ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 13m21.70127423s

                                                
                                                
** /stderr **
I0923 12:42:29.704367  669447 retry.go:31] will retry after 1m3.261691527s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (69.800743ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 14m25.033591981s

                                                
                                                
** /stderr **
I0923 12:43:33.036363  669447 retry.go:31] will retry after 59.55607985s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-052630 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-052630 top pods -n kube-system: exit status 1 (68.195808ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cvw7x, age: 15m24.660017419s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
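Every kubectl top attempt above failed with "Metrics not available" even though the metrics-server pod itself reported Running, and the kube-apiserver log earlier in this report showed the aggregated v1beta1.metrics.k8s.io endpoint refusing connections shortly after startup. A hedged diagnostic sketch (none of these commands are run by the harness) would be to check the APIService registration, query the metrics API directly, and read the metrics-server logs:
	# hypothetical diagnostics, not part of the recorded test run
	kubectl --context addons-052630 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-052630 get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
	kubectl --context addons-052630 -n kube-system logs deploy/metrics-server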
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-052630 -n addons-052630
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-052630 logs -n 25: (1.317319084s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-473947                                                                     | download-only-473947 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| delete  | -p download-only-832165                                                                     | download-only-832165 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| delete  | -p download-only-473947                                                                     | download-only-473947 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-529103 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC |                     |
	|         | binary-mirror-529103                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35373                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-529103                                                                     | binary-mirror-529103 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| addons  | disable dashboard -p                                                                        | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC |                     |
	|         | addons-052630                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC |                     |
	|         | addons-052630                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-052630 --wait=true                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:30 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:38 UTC | 23 Sep 24 12:38 UTC |
	|         | -p addons-052630                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:38 UTC | 23 Sep 24 12:38 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | addons-052630                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-052630 ssh curl -s                                                                   | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-052630 addons                                                                        | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-052630 addons                                                                        | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | -p addons-052630                                                                            |                      |         |         |                     |                     |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-052630 ip                                                                            | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:40 UTC |
	|         | addons-052630                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-052630 ssh cat                                                                       | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:40 UTC |
	|         | /opt/local-path-provisioner/pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-052630 ip                                                                            | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:41 UTC | 23 Sep 24 12:41 UTC |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:41 UTC | 23 Sep 24 12:41 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-052630 addons disable                                                                | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:41 UTC | 23 Sep 24 12:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-052630 addons                                                                        | addons-052630        | jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:28:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:28:24.813371  670144 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:28:24.813646  670144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:28:24.813655  670144 out.go:358] Setting ErrFile to fd 2...
	I0923 12:28:24.813660  670144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:28:24.813860  670144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 12:28:24.814564  670144 out.go:352] Setting JSON to false
	I0923 12:28:24.815641  670144 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7848,"bootTime":1727086657,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:28:24.815741  670144 start.go:139] virtualization: kvm guest
	I0923 12:28:24.818077  670144 out.go:177] * [addons-052630] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:28:24.819427  670144 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:28:24.819496  670144 notify.go:220] Checking for updates...
	I0923 12:28:24.821743  670144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:28:24.823109  670144 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:28:24.824398  670144 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:28:24.825560  670144 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:28:24.826608  670144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:28:24.827862  670144 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:28:24.861163  670144 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 12:28:24.862619  670144 start.go:297] selected driver: kvm2
	I0923 12:28:24.862645  670144 start.go:901] validating driver "kvm2" against <nil>
	I0923 12:28:24.862661  670144 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:28:24.863497  670144 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:28:24.863608  670144 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 12:28:24.879912  670144 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 12:28:24.879978  670144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:28:24.880260  670144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:28:24.880303  670144 cni.go:84] Creating CNI manager for ""
	I0923 12:28:24.880362  670144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 12:28:24.880373  670144 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 12:28:24.880464  670144 start.go:340] cluster config:
	{Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:28:24.880601  670144 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:28:24.882416  670144 out.go:177] * Starting "addons-052630" primary control-plane node in "addons-052630" cluster
	I0923 12:28:24.883605  670144 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:28:24.883654  670144 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 12:28:24.883668  670144 cache.go:56] Caching tarball of preloaded images
	I0923 12:28:24.883756  670144 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:28:24.883772  670144 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:28:24.884127  670144 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/config.json ...
	I0923 12:28:24.884158  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/config.json: {Name:mk8f8b007c3bc269ac83b2216416a2c7aa34749b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:24.884352  670144 start.go:360] acquireMachinesLock for addons-052630: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:28:24.884434  670144 start.go:364] duration metric: took 46.812µs to acquireMachinesLock for "addons-052630"
	I0923 12:28:24.884466  670144 start.go:93] Provisioning new machine with config: &{Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:28:24.884576  670144 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 12:28:24.886275  670144 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 12:28:24.886477  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:28:24.886532  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:28:24.901608  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I0923 12:28:24.902121  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:28:24.902783  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:28:24.902809  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:28:24.903341  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:28:24.903572  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:24.903730  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:24.903901  670144 start.go:159] libmachine.API.Create for "addons-052630" (driver="kvm2")
	I0923 12:28:24.903933  670144 client.go:168] LocalClient.Create starting
	I0923 12:28:24.903984  670144 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:28:24.971472  670144 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:28:25.199996  670144 main.go:141] libmachine: Running pre-create checks...
	I0923 12:28:25.200025  670144 main.go:141] libmachine: (addons-052630) Calling .PreCreateCheck
	I0923 12:28:25.200603  670144 main.go:141] libmachine: (addons-052630) Calling .GetConfigRaw
	I0923 12:28:25.201064  670144 main.go:141] libmachine: Creating machine...
	I0923 12:28:25.201081  670144 main.go:141] libmachine: (addons-052630) Calling .Create
	I0923 12:28:25.201318  670144 main.go:141] libmachine: (addons-052630) Creating KVM machine...
	I0923 12:28:25.202978  670144 main.go:141] libmachine: (addons-052630) DBG | found existing default KVM network
	I0923 12:28:25.203985  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.203807  670166 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I0923 12:28:25.204034  670144 main.go:141] libmachine: (addons-052630) DBG | created network xml: 
	I0923 12:28:25.204055  670144 main.go:141] libmachine: (addons-052630) DBG | <network>
	I0923 12:28:25.204063  670144 main.go:141] libmachine: (addons-052630) DBG |   <name>mk-addons-052630</name>
	I0923 12:28:25.204070  670144 main.go:141] libmachine: (addons-052630) DBG |   <dns enable='no'/>
	I0923 12:28:25.204076  670144 main.go:141] libmachine: (addons-052630) DBG |   
	I0923 12:28:25.204082  670144 main.go:141] libmachine: (addons-052630) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 12:28:25.204088  670144 main.go:141] libmachine: (addons-052630) DBG |     <dhcp>
	I0923 12:28:25.204093  670144 main.go:141] libmachine: (addons-052630) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 12:28:25.204101  670144 main.go:141] libmachine: (addons-052630) DBG |     </dhcp>
	I0923 12:28:25.204105  670144 main.go:141] libmachine: (addons-052630) DBG |   </ip>
	I0923 12:28:25.204112  670144 main.go:141] libmachine: (addons-052630) DBG |   
	I0923 12:28:25.204119  670144 main.go:141] libmachine: (addons-052630) DBG | </network>
	I0923 12:28:25.204129  670144 main.go:141] libmachine: (addons-052630) DBG | 
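The XML above is the private NAT network (192.168.39.0/24 with DHCP, DNS disabled) that the kvm2 driver hands to libvirt before building the VM, and the next log lines show it being created. A minimal sketch of reproducing that step by hand follows; the temp-file handling and the use of the virsh CLI instead of the driver's libvirt bindings are assumptions for illustration, not the driver's actual code path.

// netcreate.go - illustrative only: reproduces the "create private KVM network"
// step from the log above by shelling out to virsh; minikube's kvm2 driver
// talks to libvirt through its Go bindings rather than the virsh CLI.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-addons-052630</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the network definition to a temporary file for virsh to read.
	f, err := os.CreateTemp("", "mk-addons-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// "virsh net-create" builds and starts a transient network from the XML,
	// which is why the driver later only has to ensure the network is active.
	out, err := exec.Command("virsh", "--connect", "qemu:///system", "net-create", f.Name()).CombinedOutput()
	if err != nil {
		log.Fatalf("net-create failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}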
	I0923 12:28:25.209600  670144 main.go:141] libmachine: (addons-052630) DBG | trying to create private KVM network mk-addons-052630 192.168.39.0/24...
	I0923 12:28:25.278429  670144 main.go:141] libmachine: (addons-052630) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630 ...
	I0923 12:28:25.278462  670144 main.go:141] libmachine: (addons-052630) DBG | private KVM network mk-addons-052630 192.168.39.0/24 created
	I0923 12:28:25.278471  670144 main.go:141] libmachine: (addons-052630) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:28:25.278507  670144 main.go:141] libmachine: (addons-052630) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:28:25.278523  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.278366  670166 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:28:25.561478  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.561306  670166 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa...
	I0923 12:28:25.781646  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.781463  670166 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/addons-052630.rawdisk...
	I0923 12:28:25.781686  670144 main.go:141] libmachine: (addons-052630) DBG | Writing magic tar header
	I0923 12:28:25.781699  670144 main.go:141] libmachine: (addons-052630) DBG | Writing SSH key tar header
	I0923 12:28:25.781710  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:25.781618  670166 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630 ...
	I0923 12:28:25.781843  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630
	I0923 12:28:25.781876  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630 (perms=drwx------)
	I0923 12:28:25.781893  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:28:25.781906  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:28:25.781926  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:28:25.781942  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:28:25.781979  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:28:25.781995  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:28:25.782008  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:28:25.782019  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:28:25.782030  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:28:25.782042  670144 main.go:141] libmachine: (addons-052630) DBG | Checking permissions on dir: /home
	I0923 12:28:25.782054  670144 main.go:141] libmachine: (addons-052630) DBG | Skipping /home - not owner
	I0923 12:28:25.782073  670144 main.go:141] libmachine: (addons-052630) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:28:25.782083  670144 main.go:141] libmachine: (addons-052630) Creating domain...
	I0923 12:28:25.783344  670144 main.go:141] libmachine: (addons-052630) define libvirt domain using xml: 
	I0923 12:28:25.783364  670144 main.go:141] libmachine: (addons-052630) <domain type='kvm'>
	I0923 12:28:25.783372  670144 main.go:141] libmachine: (addons-052630)   <name>addons-052630</name>
	I0923 12:28:25.783376  670144 main.go:141] libmachine: (addons-052630)   <memory unit='MiB'>4000</memory>
	I0923 12:28:25.783381  670144 main.go:141] libmachine: (addons-052630)   <vcpu>2</vcpu>
	I0923 12:28:25.783385  670144 main.go:141] libmachine: (addons-052630)   <features>
	I0923 12:28:25.783390  670144 main.go:141] libmachine: (addons-052630)     <acpi/>
	I0923 12:28:25.783396  670144 main.go:141] libmachine: (addons-052630)     <apic/>
	I0923 12:28:25.783403  670144 main.go:141] libmachine: (addons-052630)     <pae/>
	I0923 12:28:25.783409  670144 main.go:141] libmachine: (addons-052630)     
	I0923 12:28:25.783417  670144 main.go:141] libmachine: (addons-052630)   </features>
	I0923 12:28:25.783427  670144 main.go:141] libmachine: (addons-052630)   <cpu mode='host-passthrough'>
	I0923 12:28:25.783435  670144 main.go:141] libmachine: (addons-052630)   
	I0923 12:28:25.783446  670144 main.go:141] libmachine: (addons-052630)   </cpu>
	I0923 12:28:25.783453  670144 main.go:141] libmachine: (addons-052630)   <os>
	I0923 12:28:25.783463  670144 main.go:141] libmachine: (addons-052630)     <type>hvm</type>
	I0923 12:28:25.783477  670144 main.go:141] libmachine: (addons-052630)     <boot dev='cdrom'/>
	I0923 12:28:25.783486  670144 main.go:141] libmachine: (addons-052630)     <boot dev='hd'/>
	I0923 12:28:25.783493  670144 main.go:141] libmachine: (addons-052630)     <bootmenu enable='no'/>
	I0923 12:28:25.783502  670144 main.go:141] libmachine: (addons-052630)   </os>
	I0923 12:28:25.783511  670144 main.go:141] libmachine: (addons-052630)   <devices>
	I0923 12:28:25.783529  670144 main.go:141] libmachine: (addons-052630)     <disk type='file' device='cdrom'>
	I0923 12:28:25.783552  670144 main.go:141] libmachine: (addons-052630)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/boot2docker.iso'/>
	I0923 12:28:25.783577  670144 main.go:141] libmachine: (addons-052630)       <target dev='hdc' bus='scsi'/>
	I0923 12:28:25.783588  670144 main.go:141] libmachine: (addons-052630)       <readonly/>
	I0923 12:28:25.783595  670144 main.go:141] libmachine: (addons-052630)     </disk>
	I0923 12:28:25.783607  670144 main.go:141] libmachine: (addons-052630)     <disk type='file' device='disk'>
	I0923 12:28:25.783618  670144 main.go:141] libmachine: (addons-052630)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:28:25.783633  670144 main.go:141] libmachine: (addons-052630)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/addons-052630.rawdisk'/>
	I0923 12:28:25.783643  670144 main.go:141] libmachine: (addons-052630)       <target dev='hda' bus='virtio'/>
	I0923 12:28:25.783719  670144 main.go:141] libmachine: (addons-052630)     </disk>
	I0923 12:28:25.783743  670144 main.go:141] libmachine: (addons-052630)     <interface type='network'>
	I0923 12:28:25.783752  670144 main.go:141] libmachine: (addons-052630)       <source network='mk-addons-052630'/>
	I0923 12:28:25.783766  670144 main.go:141] libmachine: (addons-052630)       <model type='virtio'/>
	I0923 12:28:25.783776  670144 main.go:141] libmachine: (addons-052630)     </interface>
	I0923 12:28:25.783789  670144 main.go:141] libmachine: (addons-052630)     <interface type='network'>
	I0923 12:28:25.783807  670144 main.go:141] libmachine: (addons-052630)       <source network='default'/>
	I0923 12:28:25.783821  670144 main.go:141] libmachine: (addons-052630)       <model type='virtio'/>
	I0923 12:28:25.783832  670144 main.go:141] libmachine: (addons-052630)     </interface>
	I0923 12:28:25.783845  670144 main.go:141] libmachine: (addons-052630)     <serial type='pty'>
	I0923 12:28:25.783856  670144 main.go:141] libmachine: (addons-052630)       <target port='0'/>
	I0923 12:28:25.783866  670144 main.go:141] libmachine: (addons-052630)     </serial>
	I0923 12:28:25.783878  670144 main.go:141] libmachine: (addons-052630)     <console type='pty'>
	I0923 12:28:25.783909  670144 main.go:141] libmachine: (addons-052630)       <target type='serial' port='0'/>
	I0923 12:28:25.783928  670144 main.go:141] libmachine: (addons-052630)     </console>
	I0923 12:28:25.783942  670144 main.go:141] libmachine: (addons-052630)     <rng model='virtio'>
	I0923 12:28:25.783955  670144 main.go:141] libmachine: (addons-052630)       <backend model='random'>/dev/random</backend>
	I0923 12:28:25.783971  670144 main.go:141] libmachine: (addons-052630)     </rng>
	I0923 12:28:25.783993  670144 main.go:141] libmachine: (addons-052630)     
	I0923 12:28:25.784002  670144 main.go:141] libmachine: (addons-052630)     
	I0923 12:28:25.784006  670144 main.go:141] libmachine: (addons-052630)   </devices>
	I0923 12:28:25.784016  670144 main.go:141] libmachine: (addons-052630) </domain>
	I0923 12:28:25.784025  670144 main.go:141] libmachine: (addons-052630) 
	I0923 12:28:25.788537  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:fa:ec:fb in network default
	I0923 12:28:25.789254  670144 main.go:141] libmachine: (addons-052630) Ensuring networks are active...
	I0923 12:28:25.789279  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:25.790127  670144 main.go:141] libmachine: (addons-052630) Ensuring network default is active
	I0923 12:28:25.790514  670144 main.go:141] libmachine: (addons-052630) Ensuring network mk-addons-052630 is active
	I0923 12:28:25.791168  670144 main.go:141] libmachine: (addons-052630) Getting domain xml...
	I0923 12:28:25.792095  670144 main.go:141] libmachine: (addons-052630) Creating domain...
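The domain XML above, followed by the "Creating domain..." step, amounts to a define-then-start pair of libvirt calls. Below is a minimal sketch under the assumption that the libvirt.org/go/libvirt bindings are available; the XML constant here is abbreviated, whereas the driver submits the full document shown in the log.

// definedomain.go - illustrative only: define and start a KVM domain from XML,
// mirroring the "define libvirt domain using xml" / "Creating domain..." steps.
// The import path and the abbreviated domainXML are assumptions for illustration.
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

const domainXML = `<domain type='kvm'>
  <name>addons-052630</name>
  <memory unit='MiB'>4000</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <!-- disks, the two virtio NICs (mk-addons-052630 and default), serial console, rng ... -->
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Persistently define the domain (virDomainDefineXML) ...
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	// ... then boot it (virDomainCreate); after this the driver waits for the
	// DHCP lease that appears later in the log.
	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain addons-052630 defined and started")
}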
	I0923 12:28:27.038227  670144 main.go:141] libmachine: (addons-052630) Waiting to get IP...
	I0923 12:28:27.038933  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:27.039372  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:27.039471  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:27.039378  670166 retry.go:31] will retry after 209.573222ms: waiting for machine to come up
	I0923 12:28:27.250785  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:27.251320  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:27.251357  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:27.251238  670166 retry.go:31] will retry after 325.370385ms: waiting for machine to come up
	I0923 12:28:27.577921  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:27.578545  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:27.578574  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:27.578492  670166 retry.go:31] will retry after 474.794229ms: waiting for machine to come up
	I0923 12:28:28.055184  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:28.055670  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:28.055696  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:28.055630  670166 retry.go:31] will retry after 474.62618ms: waiting for machine to come up
	I0923 12:28:28.532060  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:28.532544  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:28.532570  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:28.532497  670166 retry.go:31] will retry after 466.59648ms: waiting for machine to come up
	I0923 12:28:29.001527  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:29.002034  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:29.002061  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:29.001954  670166 retry.go:31] will retry after 665.819727ms: waiting for machine to come up
	I0923 12:28:29.670150  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:29.670557  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:29.670586  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:29.670496  670166 retry.go:31] will retry after 826.725256ms: waiting for machine to come up
	I0923 12:28:30.499346  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:30.499773  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:30.499804  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:30.499717  670166 retry.go:31] will retry after 1.111672977s: waiting for machine to come up
	I0923 12:28:31.612864  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:31.613371  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:31.613397  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:31.613333  670166 retry.go:31] will retry after 1.267221609s: waiting for machine to come up
	I0923 12:28:32.882782  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:32.883202  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:32.883225  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:32.883150  670166 retry.go:31] will retry after 2.15228845s: waiting for machine to come up
	I0923 12:28:35.036699  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:35.037202  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:35.037238  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:35.037140  670166 retry.go:31] will retry after 2.618330832s: waiting for machine to come up
	I0923 12:28:37.659044  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:37.659708  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:37.659740  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:37.659658  670166 retry.go:31] will retry after 3.182891363s: waiting for machine to come up
	I0923 12:28:40.843714  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:40.844042  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find current IP address of domain addons-052630 in network mk-addons-052630
	I0923 12:28:40.844066  670144 main.go:141] libmachine: (addons-052630) DBG | I0923 12:28:40.843990  670166 retry.go:31] will retry after 4.470723393s: waiting for machine to come up
	I0923 12:28:45.316645  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.317132  670144 main.go:141] libmachine: (addons-052630) Found IP for machine: 192.168.39.225
	I0923 12:28:45.317158  670144 main.go:141] libmachine: (addons-052630) Reserving static IP address...
	I0923 12:28:45.317201  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has current primary IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.317585  670144 main.go:141] libmachine: (addons-052630) DBG | unable to find host DHCP lease matching {name: "addons-052630", mac: "52:54:00:6d:fc:98", ip: "192.168.39.225"} in network mk-addons-052630
	I0923 12:28:45.396974  670144 main.go:141] libmachine: (addons-052630) Reserved static IP address: 192.168.39.225
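The repeated "will retry after ...: waiting for machine to come up" lines above come from a polling loop that re-checks the network's DHCP leases with a growing, jittered delay until 192.168.39.225 appears. A minimal sketch of that retry shape follows; the lookupIP callback, the backoff constants, and the timeout are assumptions for illustration and do not mirror minikube's retry.go exactly.

// waitforip.go - a minimal sketch of the retry pattern visible in the
// "waiting for machine to come up" lines above.
package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// waitForIP polls lookupIP until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) after each failed attempt.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		log.Printf("retry %d: will retry after %v: waiting for machine to come up", attempt, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay gradually
	}
	return "", fmt.Errorf("timed out after %v: %w", timeout, errNoLease)
}

func main() {
	// Fake lookup that succeeds on the fifth attempt, standing in for a
	// DHCP-lease query against the mk-addons-052630 network.
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 5 {
			return "", errNoLease
		}
		return "192.168.39.225", nil
	}, 2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("found IP:", ip)
}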
	I0923 12:28:45.397017  670144 main.go:141] libmachine: (addons-052630) Waiting for SSH to be available...
	I0923 12:28:45.397030  670144 main.go:141] libmachine: (addons-052630) DBG | Getting to WaitForSSH function...
	I0923 12:28:45.399773  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.400242  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.400280  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.400442  670144 main.go:141] libmachine: (addons-052630) DBG | Using SSH client type: external
	I0923 12:28:45.400468  670144 main.go:141] libmachine: (addons-052630) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa (-rw-------)
	I0923 12:28:45.400508  670144 main.go:141] libmachine: (addons-052630) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:28:45.400526  670144 main.go:141] libmachine: (addons-052630) DBG | About to run SSH command:
	I0923 12:28:45.400541  670144 main.go:141] libmachine: (addons-052630) DBG | exit 0
	I0923 12:28:45.526239  670144 main.go:141] libmachine: (addons-052630) DBG | SSH cmd err, output: <nil>: 
	I0923 12:28:45.526548  670144 main.go:141] libmachine: (addons-052630) KVM machine creation complete!
	I0923 12:28:45.526929  670144 main.go:141] libmachine: (addons-052630) Calling .GetConfigRaw
	I0923 12:28:45.527556  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:45.527717  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:45.527840  670144 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:28:45.527856  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:28:45.529429  670144 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:28:45.529452  670144 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:28:45.529459  670144 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:28:45.529467  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.531511  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.531931  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.531976  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.532096  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.532276  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.532439  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.532595  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.532719  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.532912  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.532928  670144 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:28:45.641401  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:28:45.641429  670144 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:28:45.641436  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.644203  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.644585  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.644605  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.644794  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.645002  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.645132  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.645234  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.645389  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.645579  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.645589  670144 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:28:45.754409  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:28:45.754564  670144 main.go:141] libmachine: found compatible host: buildroot
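The provisioner detection above is driven by the output of cat /etc/os-release: the ID field (buildroot here) selects the provisioner used for the rest of the setup. A minimal sketch of that check follows, running the same command over SSH with the key path from the log; the ssh invocation and the single-entry provisioner table are assumptions for illustration, not libmachine's registry.

// detect_provisioner.go - illustrative sketch of the "Detecting the provisioner"
// step above: run `cat /etc/os-release` on the VM and match the ID field.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// osReleaseID extracts the ID= field from /etc/os-release content.
func osReleaseID(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	// Same command the log shows being run on the newly created VM.
	out, err := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa",
		"-o", "StrictHostKeyChecking=no",
		"docker@192.168.39.225", "cat /etc/os-release").Output()
	if err != nil {
		log.Fatalf("ssh failed: %v", err)
	}
	switch id := osReleaseID(string(out)); id {
	case "buildroot":
		fmt.Println("found compatible host: buildroot")
	default:
		fmt.Printf("unsupported host OS %q\n", id)
	}
}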
	I0923 12:28:45.754586  670144 main.go:141] libmachine: Provisioning with buildroot...
	I0923 12:28:45.754597  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:45.754895  670144 buildroot.go:166] provisioning hostname "addons-052630"
	I0923 12:28:45.754923  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:45.755128  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.758313  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.758762  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.758793  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.758946  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.759146  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.759329  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.759482  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.759643  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.759825  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.759836  670144 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-052630 && echo "addons-052630" | sudo tee /etc/hostname
	I0923 12:28:45.884101  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-052630
	
	I0923 12:28:45.884147  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:45.886809  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.887156  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:45.887190  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:45.887396  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:45.887621  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.887844  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:45.887995  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:45.888203  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:45.888386  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:45.888401  670144 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-052630' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-052630/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-052630' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:28:46.010925  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:28:46.010962  670144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:28:46.011014  670144 buildroot.go:174] setting up certificates
	I0923 12:28:46.011029  670144 provision.go:84] configureAuth start
	I0923 12:28:46.011047  670144 main.go:141] libmachine: (addons-052630) Calling .GetMachineName
	I0923 12:28:46.011410  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:46.014459  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.014799  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.014825  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.014976  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.017411  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.017737  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.017810  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.017885  670144 provision.go:143] copyHostCerts
	I0923 12:28:46.017961  670144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 12:28:46.018127  670144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:28:46.018208  670144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:28:46.018272  670144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.addons-052630 san=[127.0.0.1 192.168.39.225 addons-052630 localhost minikube]
	I0923 12:28:46.112323  670144 provision.go:177] copyRemoteCerts
	I0923 12:28:46.112412  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:28:46.112450  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.115251  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.115655  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.115682  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.115895  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.116119  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.116317  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.116487  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.199745  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:28:46.222501  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:28:46.245931  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:28:46.268307  670144 provision.go:87] duration metric: took 257.259613ms to configureAuth
	I0923 12:28:46.268338  670144 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:28:46.268561  670144 config.go:182] Loaded profile config "addons-052630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:28:46.268643  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.271831  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.272263  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.272294  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.272469  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.272699  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.272868  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.273026  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.273169  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:46.273365  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:46.273385  670144 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:28:46.493088  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 12:28:46.493128  670144 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:28:46.493136  670144 main.go:141] libmachine: (addons-052630) Calling .GetURL
	I0923 12:28:46.494629  670144 main.go:141] libmachine: (addons-052630) DBG | Using libvirt version 6000000
	I0923 12:28:46.496809  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.497168  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.497204  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.497405  670144 main.go:141] libmachine: Docker is up and running!
	I0923 12:28:46.497422  670144 main.go:141] libmachine: Reticulating splines...
	I0923 12:28:46.497430  670144 client.go:171] duration metric: took 21.593485371s to LocalClient.Create
	I0923 12:28:46.497459  670144 start.go:167] duration metric: took 21.593561276s to libmachine.API.Create "addons-052630"
	I0923 12:28:46.497469  670144 start.go:293] postStartSetup for "addons-052630" (driver="kvm2")
	I0923 12:28:46.497479  670144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:28:46.497499  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.497777  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:28:46.497812  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.501032  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.501490  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.501519  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.501865  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.502081  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.502366  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.502522  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.587938  670144 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:28:46.592031  670144 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:28:46.592074  670144 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:28:46.592166  670144 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:28:46.592204  670144 start.go:296] duration metric: took 94.729785ms for postStartSetup
	I0923 12:28:46.592263  670144 main.go:141] libmachine: (addons-052630) Calling .GetConfigRaw
	I0923 12:28:46.592996  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:46.595992  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.596372  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.596398  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.596737  670144 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/config.json ...
	I0923 12:28:46.596934  670144 start.go:128] duration metric: took 21.712346872s to createHost
	I0923 12:28:46.596958  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.599418  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.599733  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.599767  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.599907  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.600079  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.600203  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.600310  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.600443  670144 main.go:141] libmachine: Using SSH client type: native
	I0923 12:28:46.600620  670144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0923 12:28:46.600630  670144 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:28:46.710677  670144 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727094526.683192770
	
	I0923 12:28:46.710703  670144 fix.go:216] guest clock: 1727094526.683192770
	I0923 12:28:46.710711  670144 fix.go:229] Guest: 2024-09-23 12:28:46.68319277 +0000 UTC Remote: 2024-09-23 12:28:46.596946256 +0000 UTC m=+21.821646719 (delta=86.246514ms)
	I0923 12:28:46.710733  670144 fix.go:200] guest clock delta is within tolerance: 86.246514ms
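The fix step above reads the guest's date +%s.%N, compares it against the host-side reference timestamp, and only resynchronizes the clock when the delta exceeds a tolerance; here the ~86ms delta is accepted. A small Go sketch of that comparison using the values from the log (the 1s tolerance is an assumption, not minikube's actual constant):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Values taken from the log lines above.
		guest := time.Unix(1727094526, 683192770)                         // guest "date +%s.%N"
		remote := time.Date(2024, 9, 23, 12, 28, 46, 596946256, time.UTC) // host-side reference

		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}

		const tolerance = time.Second // assumed tolerance, not minikube's real value
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}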
	I0923 12:28:46.710738  670144 start.go:83] releasing machines lock for "addons-052630", held for 21.826289183s
	I0923 12:28:46.710760  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.711055  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:46.713772  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.714188  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.714222  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.714387  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.714956  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.715183  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:28:46.715309  670144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:28:46.715383  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.715446  670144 ssh_runner.go:195] Run: cat /version.json
	I0923 12:28:46.715472  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:28:46.718318  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.718628  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.718658  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.718683  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.718845  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.719062  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.719075  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:46.719096  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:46.719238  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.719257  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:28:46.719450  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:28:46.719450  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.719543  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:28:46.719701  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:28:46.832898  670144 ssh_runner.go:195] Run: systemctl --version
	I0923 12:28:46.838565  670144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:28:46.993556  670144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:28:46.999180  670144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:28:46.999247  670144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:28:47.014650  670144 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
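The find/mv pipeline above sidelines any pre-existing bridge or podman CNI configuration by renaming it with a .mk_disabled suffix, so it cannot conflict with the CNI config minikube installs later. A rough Go equivalent of that rename pass (illustrative only, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil {
			panic(err)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			// Same match as the find expression: *bridge* or *podman* configs.
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join("/etc/cni/net.d", name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					panic(err)
				}
				fmt.Println("disabled", src)
			}
		}
	}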
	I0923 12:28:47.014678  670144 start.go:495] detecting cgroup driver to use...
	I0923 12:28:47.014749  670144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:28:47.031900  670144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:28:47.045836  670144 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:28:47.045894  670144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:28:47.059242  670144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:28:47.072860  670144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:28:47.194879  670144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:28:47.358066  670144 docker.go:233] disabling docker service ...
	I0923 12:28:47.358133  670144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:28:47.371586  670144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:28:47.384467  670144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:28:47.500779  670144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:28:47.617653  670144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:28:47.631869  670144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:28:47.649294  670144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:28:47.649381  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.659959  670144 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:28:47.660033  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.670550  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.680493  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.691259  670144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:28:47.702167  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.712481  670144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.729016  670144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:28:47.738741  670144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:28:47.747902  670144 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:28:47.747976  670144 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:28:47.759825  670144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:28:47.770483  670144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:28:47.890638  670144 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 12:28:47.979539  670144 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:28:47.979633  670144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 12:28:47.984471  670144 start.go:563] Will wait 60s for crictl version
	I0923 12:28:47.984558  670144 ssh_runner.go:195] Run: which crictl
	I0923 12:28:47.988396  670144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:28:48.030420  670144 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 12:28:48.030521  670144 ssh_runner.go:195] Run: crio --version
	I0923 12:28:48.056969  670144 ssh_runner.go:195] Run: crio --version
	I0923 12:28:48.087115  670144 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:28:48.088250  670144 main.go:141] libmachine: (addons-052630) Calling .GetIP
	I0923 12:28:48.091126  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:48.091525  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:28:48.091557  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:28:48.091833  670144 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:28:48.095821  670144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:28:48.107261  670144 kubeadm.go:883] updating cluster {Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:28:48.107375  670144 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:28:48.107425  670144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:28:48.137489  670144 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 12:28:48.137564  670144 ssh_runner.go:195] Run: which lz4
	I0923 12:28:48.141366  670144 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 12:28:48.145228  670144 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 12:28:48.145266  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 12:28:49.300797  670144 crio.go:462] duration metric: took 1.159457126s to copy over tarball
	I0923 12:28:49.300880  670144 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 12:28:51.403387  670144 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10247438s)
	I0923 12:28:51.403418  670144 crio.go:469] duration metric: took 2.102584932s to extract the tarball
	I0923 12:28:51.403426  670144 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 12:28:51.439644  670144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:28:51.487343  670144 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 12:28:51.487372  670144 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:28:51.487380  670144 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.31.1 crio true true} ...
	I0923 12:28:51.487484  670144 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-052630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:28:51.487549  670144 ssh_runner.go:195] Run: crio config
	I0923 12:28:51.529159  670144 cni.go:84] Creating CNI manager for ""
	I0923 12:28:51.529194  670144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 12:28:51.529211  670144 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:28:51.529243  670144 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-052630 NodeName:addons-052630 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:28:51.529421  670144 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-052630"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 12:28:51.529489  670144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:28:51.538786  670144 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:28:51.538860  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 12:28:51.547357  670144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 12:28:51.563034  670144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:28:51.579309  670144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0923 12:28:51.595202  670144 ssh_runner.go:195] Run: grep 192.168.39.225	control-plane.minikube.internal$ /etc/hosts
	I0923 12:28:51.598885  670144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:28:51.610214  670144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:28:51.733757  670144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:28:51.750735  670144 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630 for IP: 192.168.39.225
	I0923 12:28:51.750770  670144 certs.go:194] generating shared ca certs ...
	I0923 12:28:51.750794  670144 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:51.751013  670144 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:28:51.991610  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt ...
	I0923 12:28:51.991645  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt: {Name:mk278617102c801f9caeeac933d8c272fa433146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:51.991889  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key ...
	I0923 12:28:51.991905  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key: {Name:mk95fd2f326ff7501892adf485a2ad45653eea64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:51.992016  670144 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:28:52.107448  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt ...
	I0923 12:28:52.107483  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt: {Name:mkab8a60190e4e6c41e7af4f15f6ef17b87ed124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.107687  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key ...
	I0923 12:28:52.107702  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key: {Name:mk02e351bcbba1d3a2fba48c9faa8507f1dc7f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.107800  670144 certs.go:256] generating profile certs ...
	I0923 12:28:52.107883  670144 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.key
	I0923 12:28:52.107915  670144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt with IP's: []
	I0923 12:28:52.582241  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt ...
	I0923 12:28:52.582281  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: {Name:mkaf7ea4dbed68876d268afef229ce386755abe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.582498  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.key ...
	I0923 12:28:52.582514  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.key: {Name:mkdce34cb498d97b74470517b32fdf3aa826f879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.582615  670144 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca
	I0923 12:28:52.582638  670144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225]
	I0923 12:28:52.768950  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca ...
	I0923 12:28:52.768994  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca: {Name:mkbaa634fbd0b311944b39e34f00f96971e7ce59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.769251  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca ...
	I0923 12:28:52.769274  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca: {Name:mkf94e3b64c79f3950341d5ac1c59fe9bdbc9286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.769399  670144 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt.4809edca -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt
	I0923 12:28:52.769586  670144 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key.4809edca -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key
	I0923 12:28:52.769706  670144 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key
	I0923 12:28:52.769730  670144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt with IP's: []
	I0923 12:28:52.993061  670144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt ...
	I0923 12:28:52.993100  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt: {Name:mkc6749530eb8ff541e082b9ac5787b31147fda9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.993317  670144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key ...
	I0923 12:28:52.993335  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key: {Name:mk1f12283a82c9b262b0a92c2d76e010fb6f0100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:28:52.993550  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:28:52.993587  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:28:52.993614  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:28:52.993635  670144 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:28:52.994363  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:28:53.025659  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:28:53.052117  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:28:53.077309  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:28:53.103143  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 12:28:53.126620  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 12:28:53.149963  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:28:53.173855  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:28:53.197238  670144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:28:53.220421  670144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:28:53.236569  670144 ssh_runner.go:195] Run: openssl version
	I0923 12:28:53.242319  670144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:28:53.253251  670144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:28:53.257949  670144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:28:53.258030  670144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:28:53.264286  670144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:28:53.275223  670144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:28:53.279442  670144 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:28:53.279513  670144 kubeadm.go:392] StartCluster: {Name:addons-052630 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-052630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:28:53.279600  670144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 12:28:53.279685  670144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 12:28:53.314839  670144 cri.go:89] found id: ""
	I0923 12:28:53.314909  670144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:28:53.327186  670144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 12:28:53.336989  670144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 12:28:53.361585  670144 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:28:53.361612  670144 kubeadm.go:157] found existing configuration files:
	
	I0923 12:28:53.361662  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 12:28:53.381977  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:28:53.382054  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 12:28:53.392118  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 12:28:53.401098  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:28:53.401165  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 12:28:53.410993  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 12:28:53.420212  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:28:53.420273  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 12:28:53.429796  670144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 12:28:53.439423  670144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:28:53.439499  670144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 12:28:53.449163  670144 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 12:28:53.502584  670144 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 12:28:53.502741  670144 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 12:28:53.605559  670144 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 12:28:53.605689  670144 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 12:28:53.605816  670144 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 12:28:53.618515  670144 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 12:28:53.836787  670144 out.go:235]   - Generating certificates and keys ...
	I0923 12:28:53.836912  670144 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 12:28:53.836995  670144 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 12:28:53.873040  670144 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 12:28:54.032114  670144 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 12:28:54.141767  670144 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 12:28:54.255622  670144 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 12:28:54.855891  670144 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 12:28:54.856105  670144 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-052630 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0923 12:28:55.008507  670144 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 12:28:55.008690  670144 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-052630 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0923 12:28:55.205727  670144 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 12:28:55.375985  670144 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 12:28:55.604036  670144 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 12:28:55.604271  670144 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 12:28:55.664982  670144 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 12:28:55.716232  670144 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 12:28:55.974342  670144 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 12:28:56.056044  670144 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 12:28:56.242837  670144 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 12:28:56.243301  670144 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 12:28:56.245752  670144 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 12:28:56.248113  670144 out.go:235]   - Booting up control plane ...
	I0923 12:28:56.248255  670144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 12:28:56.248368  670144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 12:28:56.248457  670144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 12:28:56.267013  670144 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 12:28:56.273131  670144 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 12:28:56.273201  670144 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 12:28:56.405616  670144 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 12:28:56.405814  670144 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 12:28:57.405800  670144 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001202262s
	I0923 12:28:57.405948  670144 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 12:29:02.406200  670144 kubeadm.go:310] [api-check] The API server is healthy after 5.001766702s
	I0923 12:29:02.416901  670144 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 12:29:02.435826  670144 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 12:29:02.465176  670144 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 12:29:02.465450  670144 kubeadm.go:310] [mark-control-plane] Marking the node addons-052630 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 12:29:02.478428  670144 kubeadm.go:310] [bootstrap-token] Using token: 6nlf9d.x8d4dbn01qyxu2me
	I0923 12:29:02.480122  670144 out.go:235]   - Configuring RBAC rules ...
	I0923 12:29:02.480273  670144 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 12:29:02.484831  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 12:29:02.498051  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 12:29:02.506535  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 12:29:02.510753  670144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 12:29:02.514110  670144 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 12:29:02.816841  670144 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 12:29:03.265469  670144 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 12:29:03.814814  670144 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 12:29:03.815665  670144 kubeadm.go:310] 
	I0923 12:29:03.815740  670144 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 12:29:03.815754  670144 kubeadm.go:310] 
	I0923 12:29:03.815856  670144 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 12:29:03.815884  670144 kubeadm.go:310] 
	I0923 12:29:03.815943  670144 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 12:29:03.816033  670144 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 12:29:03.816112  670144 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 12:29:03.816122  670144 kubeadm.go:310] 
	I0923 12:29:03.816205  670144 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 12:29:03.816220  670144 kubeadm.go:310] 
	I0923 12:29:03.816283  670144 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 12:29:03.816292  670144 kubeadm.go:310] 
	I0923 12:29:03.816361  670144 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 12:29:03.816459  670144 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 12:29:03.816557  670144 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 12:29:03.816565  670144 kubeadm.go:310] 
	I0923 12:29:03.816662  670144 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 12:29:03.816807  670144 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 12:29:03.816828  670144 kubeadm.go:310] 
	I0923 12:29:03.816928  670144 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6nlf9d.x8d4dbn01qyxu2me \
	I0923 12:29:03.817053  670144 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff \
	I0923 12:29:03.817087  670144 kubeadm.go:310] 	--control-plane 
	I0923 12:29:03.817098  670144 kubeadm.go:310] 
	I0923 12:29:03.817208  670144 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 12:29:03.817218  670144 kubeadm.go:310] 
	I0923 12:29:03.817336  670144 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6nlf9d.x8d4dbn01qyxu2me \
	I0923 12:29:03.817491  670144 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff 
	I0923 12:29:03.818641  670144 kubeadm.go:310] W0923 12:28:53.480461     822 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:29:03.818988  670144 kubeadm.go:310] W0923 12:28:53.482044     822 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:29:03.819085  670144 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 12:29:03.819100  670144 cni.go:84] Creating CNI manager for ""
	I0923 12:29:03.819107  670144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 12:29:03.821098  670144 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 12:29:03.822568  670144 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 12:29:03.832801  670144 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
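(Editorial aside) The 496-byte conflist written above is not printed in the log. For orientation only, a bridge CNI configuration of this general shape looks roughly like the following; the field values are assumptions, not the contents of the actual /etc/cni/net.d/1-k8s.conflist:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
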
	I0923 12:29:03.849124  670144 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 12:29:03.849234  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:03.849289  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-052630 minikube.k8s.io/updated_at=2024_09_23T12_29_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-052630 minikube.k8s.io/primary=true
	I0923 12:29:03.869073  670144 ops.go:34] apiserver oom_adj: -16
	I0923 12:29:03.987718  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:04.487902  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:04.988414  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:05.488480  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:05.988814  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:06.488344  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:06.987998  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:07.487981  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:07.987977  670144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:29:08.098139  670144 kubeadm.go:1113] duration metric: took 4.248990269s to wait for elevateKubeSystemPrivileges
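(Editorial aside) The repeated "kubectl get sa default" calls above poll for the default service account in roughly 500ms steps until it exists; that is the ~4.2s elevateKubeSystemPrivileges wait reported here. A minimal Go sketch of that polling pattern, illustrative only and not minikube's implementation (it assumes kubectl is on PATH and reuses the kubeconfig path from the log):

	package main
	
	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForDefaultSA retries "kubectl get sa default" about every 500ms
	// until it succeeds or the context expires.
	func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			cmd := exec.CommandContext(ctx, "kubectl",
				"--kubeconfig="+kubeconfig, "get", "sa", "default")
			if err := cmd.Run(); err == nil {
				return nil // the default service account now exists
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for default service account: %w", ctx.Err())
			case <-ticker.C:
				// try again on the next tick (~500ms, matching the log cadence)
			}
		}
	}
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("default service account is ready")
	}
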
	I0923 12:29:08.098178  670144 kubeadm.go:394] duration metric: took 14.818670797s to StartCluster
	I0923 12:29:08.098199  670144 settings.go:142] acquiring lock: {Name:mk3da09e51125fc906a9e1276ab490fc7b26b03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:29:08.098319  670144 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:29:08.098684  670144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/kubeconfig: {Name:mk213d38080414fbe499f6509d2653fd99103348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:29:08.098883  670144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 12:29:08.098897  670144 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:29:08.098959  670144 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
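(Editorial aside) Each "true" entry in the toEnable map above is an addon this test profile turns on. Outside the test harness the same set can be inspected and toggled with the minikube CLI, for example:

	minikube addons list
	minikube addons enable registry
	minikube start --addons=registry --addons=ingress

These commands are shown for orientation; the exact flags used by this test run are not part of this excerpt.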
	I0923 12:29:08.099099  670144 addons.go:69] Setting yakd=true in profile "addons-052630"
	I0923 12:29:08.099104  670144 config.go:182] Loaded profile config "addons-052630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:29:08.099133  670144 addons.go:234] Setting addon yakd=true in "addons-052630"
	I0923 12:29:08.099140  670144 addons.go:69] Setting inspektor-gadget=true in profile "addons-052630"
	I0923 12:29:08.099148  670144 addons.go:69] Setting default-storageclass=true in profile "addons-052630"
	I0923 12:29:08.099155  670144 addons.go:69] Setting ingress=true in profile "addons-052630"
	I0923 12:29:08.099164  670144 addons.go:69] Setting metrics-server=true in profile "addons-052630"
	I0923 12:29:08.099174  670144 addons.go:69] Setting cloud-spanner=true in profile "addons-052630"
	I0923 12:29:08.099179  670144 addons.go:234] Setting addon ingress=true in "addons-052630"
	I0923 12:29:08.099186  670144 addons.go:234] Setting addon metrics-server=true in "addons-052630"
	I0923 12:29:08.099174  670144 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-052630"
	I0923 12:29:08.099213  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099168  670144 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-052630"
	I0923 12:29:08.099224  670144 addons.go:69] Setting storage-provisioner=true in profile "addons-052630"
	I0923 12:29:08.099247  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099248  670144 addons.go:234] Setting addon storage-provisioner=true in "addons-052630"
	I0923 12:29:08.099178  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099297  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099185  670144 addons.go:69] Setting volcano=true in profile "addons-052630"
	I0923 12:29:08.099407  670144 addons.go:234] Setting addon volcano=true in "addons-052630"
	I0923 12:29:08.099456  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099684  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099696  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099705  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099709  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099726  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099123  670144 addons.go:69] Setting ingress-dns=true in profile "addons-052630"
	I0923 12:29:08.099728  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099737  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099739  670144 addons.go:234] Setting addon ingress-dns=true in "addons-052630"
	I0923 12:29:08.099769  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099797  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.099133  670144 addons.go:69] Setting registry=true in profile "addons-052630"
	I0923 12:29:08.099726  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099823  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099158  670144 addons.go:234] Setting addon inspektor-gadget=true in "addons-052630"
	I0923 12:29:08.099199  670144 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-052630"
	I0923 12:29:08.099850  670144 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-052630"
	I0923 12:29:08.099824  670144 addons.go:234] Setting addon registry=true in "addons-052630"
	I0923 12:29:08.099189  670144 addons.go:234] Setting addon cloud-spanner=true in "addons-052630"
	I0923 12:29:08.099150  670144 addons.go:69] Setting gcp-auth=true in profile "addons-052630"
	I0923 12:29:08.099904  670144 mustload.go:65] Loading cluster: addons-052630
	I0923 12:29:08.099944  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.099191  670144 addons.go:69] Setting volumesnapshots=true in profile "addons-052630"
	I0923 12:29:08.099995  670144 addons.go:234] Setting addon volumesnapshots=true in "addons-052630"
	I0923 12:29:08.100023  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100047  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100072  670144 config.go:182] Loaded profile config "addons-052630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:29:08.100106  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100108  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100138  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100335  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100357  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100427  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100433  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100447  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100452  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100507  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100524  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.100027  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.100940  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.100978  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099218  670144 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-052630"
	I0923 12:29:08.101095  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.101121  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099193  670144 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-052630"
	I0923 12:29:08.101287  670144 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-052630"
	I0923 12:29:08.101320  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.101767  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.101789  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.099835  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.103920  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.110406  670144 out.go:177] * Verifying Kubernetes components...
	I0923 12:29:08.119535  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.119599  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.120427  670144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:29:08.121315  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0923 12:29:08.131609  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0923 12:29:08.131626  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0923 12:29:08.131667  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.131728  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I0923 12:29:08.131769  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34841
	I0923 12:29:08.132495  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.132503  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.132728  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40563
	I0923 12:29:08.132745  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.132750  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.132759  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133032  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.133052  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133306  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.133386  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.133413  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.133429  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133440  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.133482  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.133740  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.133761  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.133851  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.134081  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.134103  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.134261  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.134297  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.134429  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.134444  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.134456  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.134491  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.134545  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.134840  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.135147  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.135183  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.135520  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.135605  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.136217  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.136235  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.136747  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.137331  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.137369  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.164109  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0923 12:29:08.164380  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0923 12:29:08.164631  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.164825  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.165148  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.165170  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.165570  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.165782  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.165803  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.165872  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.166203  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.166826  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.166869  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.167521  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46661
	I0923 12:29:08.169501  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0923 12:29:08.174598  670144 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-052630"
	I0923 12:29:08.178846  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.179076  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.178895  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0923 12:29:08.178930  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I0923 12:29:08.178972  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0923 12:29:08.178981  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46573
	I0923 12:29:08.178989  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I0923 12:29:08.179006  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I0923 12:29:08.179011  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37487
	I0923 12:29:08.180724  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.181079  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.181494  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.181522  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.181629  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.182366  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.182449  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.182465  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.182959  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.183025  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.183079  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.183168  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.183230  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184031  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184134  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.184154  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184166  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.184243  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184292  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.184307  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.184322  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.184439  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.184449  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.184993  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.185059  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.185103  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.185104  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.185125  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.185195  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.185234  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.185246  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.185293  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.185354  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I0923 12:29:08.185636  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.185676  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.186611  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.186677  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.186857  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.187550  670144 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 12:29:08.187925  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.187956  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.188199  670144 addons.go:234] Setting addon default-storageclass=true in "addons-052630"
	I0923 12:29:08.188242  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.188598  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.188651  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.188880  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 12:29:08.188903  670144 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 12:29:08.188923  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.189126  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.189189  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.189258  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.189738  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.191347  670144 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:29:08.191425  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.193271  670144 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 12:29:08.193533  670144 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:29:08.193553  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:29:08.193574  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.193841  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.193953  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.194007  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.194283  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.194821  670144 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 12:29:08.194839  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 12:29:08.194858  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.195552  670144 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 12:29:08.195768  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.195845  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37407
	I0923 12:29:08.196376  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0923 12:29:08.196521  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.196672  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 12:29:08.196691  670144 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 12:29:08.196719  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.197056  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.197598  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.197684  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.197702  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.198047  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.198072  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.198113  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.198266  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.198283  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.198479  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.198489  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.198547  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:08.198664  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.198771  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.198953  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.198987  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.199210  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.199249  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.199775  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.199959  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.202164  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.202238  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.202474  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.202495  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.202578  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.202596  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.203141  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.203337  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.203517  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.203558  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.203645  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.203720  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.203863  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.203890  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.204069  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.204122  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.204301  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.204456  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.204512  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.204526  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.204686  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.204802  670144 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 12:29:08.204956  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.205170  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.205332  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.205461  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.206267  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.206285  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.206516  670144 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 12:29:08.206532  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 12:29:08.206551  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.206706  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.207377  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.207419  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.208406  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46335
	I0923 12:29:08.209619  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.210047  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.210073  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.210236  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.210426  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.210566  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.210684  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.219445  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I0923 12:29:08.219533  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I0923 12:29:08.219589  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0923 12:29:08.220785  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0923 12:29:08.222697  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45277
	I0923 12:29:08.225038  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39561
	I0923 12:29:08.230680  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.230751  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.231036  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231200  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231237  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231376  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231767  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.231972  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.231987  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233085  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.233089  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.233147  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.233211  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233227  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233345  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233361  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233363  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233373  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233375  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233386  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233880  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.233899  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.233917  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.233942  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.233992  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.234058  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.234091  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.234676  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.234695  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.234731  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.234771  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.234892  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.235047  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.235091  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.235382  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.235459  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.236193  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.236849  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:08.236900  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:08.238129  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.238450  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.238525  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.238905  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:08.238923  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:08.239076  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:08.239089  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:08.239099  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:08.239108  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:08.239201  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.240929  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 12:29:08.240995  670144 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 12:29:08.241278  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:08.242787  670144 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0923 12:29:08.242806  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 12:29:08.242897  670144 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 12:29:08.242950  670144 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 12:29:08.243197  670144 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 12:29:08.243226  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 12:29:08.243249  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.244528  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 12:29:08.246261  670144 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 12:29:08.246338  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 12:29:08.248195  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.248288  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.248307  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.248324  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.248538  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.248670  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.248779  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.250051  670144 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 12:29:08.250094  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 12:29:08.250119  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.251740  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 12:29:08.253185  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 12:29:08.253489  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.254182  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.254209  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.254598  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.254820  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.255024  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.255199  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.255972  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 12:29:08.256311  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0923 12:29:08.256884  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.256951  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0923 12:29:08.257532  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.257556  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.257657  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.258214  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.258239  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.258317  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.258515  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.258635  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 12:29:08.259348  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.259794  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.260013  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44129
	I0923 12:29:08.260784  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.260900  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 12:29:08.261518  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.262280  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 12:29:08.262305  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 12:29:08.262329  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.263111  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0923 12:29:08.263125  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.263182  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.263211  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.263259  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:29:08.263553  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.263921  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.264090  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.264286  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.264224  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.264779  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.264968  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.266052  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 12:29:08.266086  670144 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 12:29:08.266718  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.266760  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.267350  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.267376  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.267443  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.267645  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.267821  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.268028  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.268401  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:08.268717  670144 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:29:08.268738  670144 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:29:08.268757  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.269685  670144 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 12:29:08.269698  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:29:08.270652  670144 out.go:177]   - Using image docker.io/busybox:stable
	I0923 12:29:08.271437  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 12:29:08.271460  670144 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 12:29:08.271489  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.271705  670144 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 12:29:08.271764  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 12:29:08.271806  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.271995  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.272341  670144 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 12:29:08.272361  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 12:29:08.272378  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.274161  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.274186  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.274494  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.274772  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.274952  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.275114  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.275804  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.275823  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.276398  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.276424  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.276437  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.276506  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.276618  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.276764  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.276814  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.276970  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.276988  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.277148  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.277311  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.277371  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37819
	I0923 12:29:08.277484  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.277856  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:08.277961  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.278476  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.278486  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:08.278532  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:08.278534  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.278618  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.278754  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.278860  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.278893  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:08.278987  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:08.279199  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:08.280614  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	W0923 12:29:08.281601  670144 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40984->192.168.39.225:22: read: connection reset by peer
	I0923 12:29:08.281629  670144 retry.go:31] will retry after 168.892195ms: ssh: handshake failed: read tcp 192.168.39.1:40984->192.168.39.225:22: read: connection reset by peer
	I0923 12:29:08.282699  670144 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 12:29:08.283895  670144 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 12:29:08.283910  670144 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 12:29:08.283931  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:08.286545  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.286945  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:08.286960  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:08.287159  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:08.287298  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:08.287395  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:08.287501  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	W0923 12:29:08.451555  670144 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41002->192.168.39.225:22: read: connection reset by peer
	I0923 12:29:08.451611  670144 retry.go:31] will retry after 370.404405ms: ssh: handshake failed: read tcp 192.168.39.1:41002->192.168.39.225:22: read: connection reset by peer
	I0923 12:29:08.501288  670144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:29:08.501333  670144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 12:29:08.574946  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:29:08.650848  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 12:29:08.710883  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 12:29:08.718226  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 12:29:08.718254  670144 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 12:29:08.724979  670144 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 12:29:08.725012  670144 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 12:29:08.729985  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 12:29:08.730007  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 12:29:08.749343  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:29:08.759919  670144 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 12:29:08.759951  670144 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 12:29:08.762704  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 12:29:08.762725  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 12:29:08.780285  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 12:29:08.797085  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 12:29:08.819576  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 12:29:08.871295  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 12:29:08.871331  670144 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 12:29:08.873395  670144 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 12:29:08.873415  670144 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 12:29:08.913764  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 12:29:08.913797  670144 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 12:29:08.953695  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 12:29:08.953730  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 12:29:08.989719  670144 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 12:29:08.989745  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 12:29:09.174275  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 12:29:09.174311  670144 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 12:29:09.209701  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 12:29:09.213032  670144 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:29:09.213062  670144 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 12:29:09.235662  670144 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 12:29:09.235711  670144 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 12:29:09.249524  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 12:29:09.249560  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 12:29:09.318365  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:29:09.380514  670144 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 12:29:09.380546  670144 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 12:29:09.396450  670144 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 12:29:09.396479  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 12:29:09.491655  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 12:29:09.491699  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 12:29:09.507296  670144 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 12:29:09.507325  670144 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 12:29:09.619384  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 12:29:09.674496  670144 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 12:29:09.674532  670144 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 12:29:09.791378  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 12:29:09.791409  670144 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 12:29:09.916463  670144 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 12:29:09.916518  670144 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 12:29:10.095369  670144 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 12:29:10.095403  670144 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 12:29:10.151495  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 12:29:10.151529  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 12:29:10.341472  670144 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 12:29:10.341505  670144 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 12:29:10.355580  670144 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 12:29:10.355613  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 12:29:10.419301  670144 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 12:29:10.419334  670144 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 12:29:10.525480  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 12:29:10.525516  670144 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 12:29:10.591491  670144 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 12:29:10.591518  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 12:29:10.598636  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 12:29:10.676043  670144 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.174707084s)
	I0923 12:29:10.676099  670144 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.174727254s)
	I0923 12:29:10.676164  670144 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0923 12:29:10.677107  670144 node_ready.go:35] waiting up to 6m0s for node "addons-052630" to be "Ready" ...
	I0923 12:29:10.681243  670144 node_ready.go:49] node "addons-052630" has status "Ready":"True"
	I0923 12:29:10.681278  670144 node_ready.go:38] duration metric: took 4.144676ms for node "addons-052630" to be "Ready" ...
	I0923 12:29:10.681290  670144 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:29:10.697913  670144 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:10.820653  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 12:29:10.825588  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 12:29:10.825612  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 12:29:11.166886  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 12:29:11.166909  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 12:29:11.180409  670144 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-052630" context rescaled to 1 replicas
	I0923 12:29:11.447351  670144 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 12:29:11.447384  670144 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 12:29:11.721490  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 12:29:12.078341  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.427447212s)
	I0923 12:29:12.078414  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078429  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.078443  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.503450542s)
	I0923 12:29:12.078485  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078498  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.078823  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:12.078831  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.078854  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.078856  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.078863  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078868  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.078871  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.078878  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:12.078891  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:12.079227  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:12.079263  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.079271  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.079315  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:12.079335  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:12.079341  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:12.803456  670144 pod_ready.go:103] pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:13.600807  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.889878058s)
	I0923 12:29:13.600875  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.600825  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.851443065s)
	I0923 12:29:13.600943  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.600962  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.600888  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.600895  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.820571857s)
	I0923 12:29:13.601061  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601070  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601238  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.601278  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.601285  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.601270  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.601304  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.601315  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601328  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601293  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601389  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601391  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.601429  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.601437  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.601449  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.601455  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.601954  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.602020  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.602042  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:13.602063  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.602072  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.602294  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.602306  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.603331  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.603349  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.801670  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.801695  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.802002  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.802041  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 12:29:13.802159  670144 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0923 12:29:13.880403  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:13.880433  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:13.880754  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:13.880776  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:13.880836  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:14.235264  670144 pod_ready.go:93] pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:14.235297  670144 pod_ready.go:82] duration metric: took 3.537339059s for pod "coredns-7c65d6cfc9-cvw7x" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:14.235308  670144 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v7dmc" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:14.291401  670144 pod_ready.go:93] pod "coredns-7c65d6cfc9-v7dmc" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:14.291428  670144 pod_ready.go:82] duration metric: took 56.113983ms for pod "coredns-7c65d6cfc9-v7dmc" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:14.291438  670144 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.285912  670144 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 12:29:15.285962  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:15.289442  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.289901  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:15.289933  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.290206  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:15.290456  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:15.290643  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:15.290816  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:15.584286  670144 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 12:29:15.772056  670144 addons.go:234] Setting addon gcp-auth=true in "addons-052630"
	I0923 12:29:15.772177  670144 host.go:66] Checking if "addons-052630" exists ...
	I0923 12:29:15.772565  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:15.772604  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:15.789694  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0923 12:29:15.790390  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:15.790928  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:15.790953  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:15.791398  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:15.791922  670144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:29:15.791974  670144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:29:15.808522  670144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43819
	I0923 12:29:15.809129  670144 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:29:15.809845  670144 main.go:141] libmachine: Using API Version  1
	I0923 12:29:15.809875  670144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:29:15.810306  670144 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:29:15.810586  670144 main.go:141] libmachine: (addons-052630) Calling .GetState
	I0923 12:29:15.812642  670144 main.go:141] libmachine: (addons-052630) Calling .DriverName
	I0923 12:29:15.812962  670144 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 12:29:15.812999  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHHostname
	I0923 12:29:15.816164  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.816654  670144 main.go:141] libmachine: (addons-052630) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fc:98", ip: ""} in network mk-addons-052630: {Iface:virbr1 ExpiryTime:2024-09-23 13:28:39 +0000 UTC Type:0 Mac:52:54:00:6d:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-052630 Clientid:01:52:54:00:6d:fc:98}
	I0923 12:29:15.816681  670144 main.go:141] libmachine: (addons-052630) DBG | domain addons-052630 has defined IP address 192.168.39.225 and MAC address 52:54:00:6d:fc:98 in network mk-addons-052630
	I0923 12:29:15.816904  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHPort
	I0923 12:29:15.817091  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHKeyPath
	I0923 12:29:15.817236  670144 main.go:141] libmachine: (addons-052630) Calling .GetSSHUsername
	I0923 12:29:15.817376  670144 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/addons-052630/id_rsa Username:docker}
	I0923 12:29:15.891555  670144 pod_ready.go:93] pod "etcd-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:15.891581  670144 pod_ready.go:82] duration metric: took 1.60013549s for pod "etcd-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.891591  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.987597  670144 pod_ready.go:93] pod "kube-apiserver-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:15.987625  670144 pod_ready.go:82] duration metric: took 96.027461ms for pod "kube-apiserver-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:15.987635  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.145156  670144 pod_ready.go:93] pod "kube-controller-manager-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:16.145181  670144 pod_ready.go:82] duration metric: took 157.538978ms for pod "kube-controller-manager-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.145191  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn9km" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.318509  670144 pod_ready.go:93] pod "kube-proxy-vn9km" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:16.318542  670144 pod_ready.go:82] duration metric: took 173.342123ms for pod "kube-proxy-vn9km" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.318556  670144 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.367647  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.570518238s)
	I0923 12:29:16.367707  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.548102227s)
	I0923 12:29:16.367717  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.367731  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.367736  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.367751  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.367955  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.158101812s)
	I0923 12:29:16.368015  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368031  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368190  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368220  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368221  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368223  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368320  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368344  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368231  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368372  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368380  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368401  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.74898188s)
	I0923 12:29:16.368253  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368427  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368432  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368436  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368440  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368446  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368565  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.769896333s)
	I0923 12:29:16.368589  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.368597  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.368664  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.368679  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.368279  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368699  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.368353  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.369082  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369131  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369155  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.369160  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.369167  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.369173  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.369248  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369265  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.369295  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.369301  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.369309  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.369315  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.370458  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.370480  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.370493  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.370494  670144 addons.go:475] Verifying addon registry=true in "addons-052630"
	I0923 12:29:16.370783  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.370808  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.370815  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.371296  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.371308  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.371446  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.371466  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.371473  670144 addons.go:475] Verifying addon ingress=true in "addons-052630"
	I0923 12:29:16.372129  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.053719131s)
	I0923 12:29:16.372181  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.372203  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.372468  670144 out.go:177] * Verifying registry addon...
	I0923 12:29:16.372506  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.372533  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.373064  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.373074  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:16.373084  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:16.372536  670144 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-052630 service yakd-dashboard -n yakd-dashboard
	
	I0923 12:29:16.373416  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:16.373455  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:16.373463  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:16.373482  670144 addons.go:475] Verifying addon metrics-server=true in "addons-052630"
	I0923 12:29:16.373548  670144 out.go:177] * Verifying ingress addon...
	I0923 12:29:16.376859  670144 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 12:29:16.377235  670144 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 12:29:16.403137  670144 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 12:29:16.403166  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:16.404545  670144 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 12:29:16.404577  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:16.413711  670144 pod_ready.go:93] pod "kube-scheduler-addons-052630" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:16.413735  670144 pod_ready.go:82] duration metric: took 95.170893ms for pod "kube-scheduler-addons-052630" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.413745  670144 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:16.687574  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.866859653s)
	W0923 12:29:16.687654  670144 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 12:29:16.687692  670144 retry.go:31] will retry after 205.184874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 12:29:16.893570  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 12:29:17.115140  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:17.115729  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:17.396617  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:17.396842  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:17.889967  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:17.890486  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:17.896395  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.174848485s)
	I0923 12:29:17.896449  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:17.896460  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:17.896462  670144 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.083466495s)
	I0923 12:29:17.896747  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:17.896804  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:17.896821  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:17.896830  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:17.897120  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:17.897136  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:17.897147  670144 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-052630"
	I0923 12:29:17.898347  670144 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 12:29:17.898446  670144 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 12:29:17.899858  670144 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:29:17.900628  670144 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 12:29:17.901271  670144 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 12:29:17.901295  670144 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 12:29:17.940858  670144 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 12:29:17.940896  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:17.996704  670144 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 12:29:17.996735  670144 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 12:29:18.047586  670144 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 12:29:18.047614  670144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 12:29:18.096484  670144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 12:29:18.185732  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.292020776s)
	I0923 12:29:18.185806  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:18.185838  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:18.186138  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:18.186158  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:18.186169  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:18.186177  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:18.186426  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:18.186447  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:18.387863  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:18.388256  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:18.406385  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:18.421720  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:18.882500  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:18.882785  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:18.905191  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:19.387726  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:19.388481  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:19.411200  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:19.581790  670144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.485262596s)
	I0923 12:29:19.581873  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:19.581891  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:19.582219  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:19.582276  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:19.582301  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:19.582317  670144 main.go:141] libmachine: Making call to close driver server
	I0923 12:29:19.582328  670144 main.go:141] libmachine: (addons-052630) Calling .Close
	I0923 12:29:19.582590  670144 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:29:19.582647  670144 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:29:19.582672  670144 main.go:141] libmachine: (addons-052630) DBG | Closing plugin on server side
	I0923 12:29:19.584672  670144 addons.go:475] Verifying addon gcp-auth=true in "addons-052630"
	I0923 12:29:19.586440  670144 out.go:177] * Verifying gcp-auth addon...
	I0923 12:29:19.589206  670144 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 12:29:19.620640  670144 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 12:29:19.620668  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:19.886738  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:19.890925  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:19.912686  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:20.096746  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:20.392258  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:20.393710  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:20.407449  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:20.593567  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:20.881568  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:20.881815  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:20.905516  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:20.920340  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:21.093740  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:21.384843  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:21.384987  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:21.405282  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:21.592541  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:21.884592  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:21.885028  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:21.908345  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:22.093490  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:22.386941  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:22.387161  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:22.404796  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:22.592403  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:22.881616  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:22.881661  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:22.905343  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:23.093177  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:23.384666  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:23.386163  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:23.426576  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:23.487848  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:23.592494  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:23.882714  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:23.883358  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:23.906870  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:24.092492  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:24.382319  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:24.382983  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:24.407140  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:24.593539  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:24.882594  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:24.883125  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:24.905274  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:25.092842  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:25.382809  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:25.382812  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:25.406742  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:25.593227  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:25.884510  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:25.888982  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:25.905898  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:25.927041  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:26.093083  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:26.381626  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:26.382291  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:26.405944  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:26.592774  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:26.882136  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:26.882387  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:26.904852  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:27.093581  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:27.382186  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:27.382448  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:27.405778  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:27.593357  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:27.884042  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:27.884439  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:27.985517  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:28.092766  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:28.381805  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:28.381982  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:28.405524  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:28.424581  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:28.592693  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:28.882335  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:28.882461  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:28.905150  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:29.093790  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:29.381852  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:29.381930  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:29.406197  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:29.593870  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:29.882541  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:29.882798  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:29.905474  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:30.093606  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:30.382135  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:30.382392  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:30.404887  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:30.592667  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:30.881745  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:30.881985  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:30.907119  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:30.923733  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:31.093218  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:31.381583  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:31.381644  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:31.405219  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:31.593141  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:31.881719  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:31.882449  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:31.905985  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:32.093520  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:32.381819  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:32.382499  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:32.406447  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:32.592822  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:32.883086  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:32.883410  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:32.904975  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:33.093110  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:33.381891  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:33.383762  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:33.407942  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:33.422107  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:33.593115  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:33.881264  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:33.881728  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:33.906608  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:34.093572  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:34.381552  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:34.382128  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:34.405613  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:34.592996  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:34.882206  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:34.882652  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:34.907227  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:35.092746  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:35.381896  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:35.382256  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:35.405744  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:35.593906  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:35.882021  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:35.882250  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:35.905757  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:35.919545  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:36.093133  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:36.381087  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:36.381911  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:36.405918  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:36.593023  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:36.880871  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:36.881484  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:36.905513  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:37.093228  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:37.381359  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:37.382168  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:37.404758  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:37.592991  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:37.883706  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:37.884057  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:37.905951  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:37.921061  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:38.095579  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:38.381352  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:38.382050  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:38.406732  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:38.592418  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:38.882769  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:38.884781  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:38.909673  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:39.092517  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:39.384210  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:39.385066  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:39.405577  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:39.592411  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:39.882233  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:39.882964  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:39.905696  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:39.921969  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:40.092984  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:40.382732  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:40.383202  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:40.405785  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:40.593074  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:40.882030  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:40.882422  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:40.904994  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:41.093877  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:41.383225  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:41.383328  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:41.405996  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:41.593221  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:41.881622  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:41.881736  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:41.905316  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:42.093230  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:42.382510  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:42.382663  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:42.405377  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:42.419518  670144 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"False"
	I0923 12:29:42.592420  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:42.880988  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:42.881203  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:42.906415  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:43.092742  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:43.382514  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:43.383733  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:43.719884  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:43.720755  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:43.888232  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:43.889178  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:43.904914  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:44.094101  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:44.383060  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:44.383829  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:44.405971  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:44.592595  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:44.887366  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:44.887955  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:44.906306  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:44.922735  670144 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace has status "Ready":"True"
	I0923 12:29:44.922765  670144 pod_ready.go:82] duration metric: took 28.50901084s for pod "nvidia-device-plugin-daemonset-fhnrr" in "kube-system" namespace to be "Ready" ...
	I0923 12:29:44.922773  670144 pod_ready.go:39] duration metric: took 34.241469342s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:29:44.922792  670144 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:29:44.922851  670144 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:29:44.942826  670144 api_server.go:72] duration metric: took 36.843890873s to wait for apiserver process to appear ...
	I0923 12:29:44.942854  670144 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:29:44.942876  670144 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0923 12:29:44.947699  670144 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0923 12:29:44.948883  670144 api_server.go:141] control plane version: v1.31.1
	I0923 12:29:44.948908  670144 api_server.go:131] duration metric: took 6.047956ms to wait for apiserver health ...
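(Illustrative aside, not part of the captured log.) The api_server.go lines above wait for the kube-apiserver process with pgrep, then poll https://192.168.39.225:8443/healthz until it answers 200 and read the control-plane version. A minimal Go sketch of that kind of healthz probe, assuming a hard-coded endpoint and skipped TLS verification (minikube itself authenticates with the cluster's client certificates), could look like this:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch: skip certificate verification instead of
		// loading the cluster CA and client certificates the way minikube does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // matches the "returned 200: ok" lines above
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.225:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}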
	I0923 12:29:44.948917  670144 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:29:44.958208  670144 system_pods.go:59] 17 kube-system pods found
	I0923 12:29:44.958245  670144 system_pods.go:61] "coredns-7c65d6cfc9-cvw7x" [3de8bd3c-0baf-459b-94f8-f5d52ef1286d] Running
	I0923 12:29:44.958253  670144 system_pods.go:61] "csi-hostpath-attacher-0" [4c3e1f51-c4eb-4fa0-ab09-335efd2aa843] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 12:29:44.958259  670144 system_pods.go:61] "csi-hostpath-resizer-0" [e4676deb-26a8-4a3c-87ac-a226db6563ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 12:29:44.958271  670144 system_pods.go:61] "csi-hostpathplugin-jd2lw" [feb3c94a-858a-4f61-a148-8b64dcfd0934] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 12:29:44.958276  670144 system_pods.go:61] "etcd-addons-052630" [ecb6248b-7e04-4747-946a-eb8fc976147e] Running
	I0923 12:29:44.958280  670144 system_pods.go:61] "kube-apiserver-addons-052630" [578f26c5-733e-4d3b-85da-ecade8aa52dd] Running
	I0923 12:29:44.958284  670144 system_pods.go:61] "kube-controller-manager-addons-052630" [55212af5-b2df-4621-a846-c8912549238d] Running
	I0923 12:29:44.958288  670144 system_pods.go:61] "kube-ingress-dns-minikube" [2187b5c3-511a-4aab-a372-f66d680bbf18] Running
	I0923 12:29:44.958291  670144 system_pods.go:61] "kube-proxy-vn9km" [0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00] Running
	I0923 12:29:44.958295  670144 system_pods.go:61] "kube-scheduler-addons-052630" [a180218d-c5e9-4947-b527-7f9570b9c578] Running
	I0923 12:29:44.958300  670144 system_pods.go:61] "metrics-server-84c5f94fbc-2rhln" [e7c5ceb3-389e-43ff-b807-718f23f12b0f] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 12:29:44.958304  670144 system_pods.go:61] "nvidia-device-plugin-daemonset-fhnrr" [8455a016-6ce8-40d4-bd64-ec3d2e30f774] Running
	I0923 12:29:44.958310  670144 system_pods.go:61] "registry-66c9cd494c-srklj" [ca56f86a-1049-47d9-b11b-9f492f1f0e5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 12:29:44.958314  670144 system_pods.go:61] "registry-proxy-xmmdr" [cf74bb33-75e5-4844-a3a8-fc698241ea5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 12:29:44.958320  670144 system_pods.go:61] "snapshot-controller-56fcc65765-76p2p" [20745ac3-21a3-45a6-8861-c0ba3567f38a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.958325  670144 system_pods.go:61] "snapshot-controller-56fcc65765-pzghc" [e4692d57-c84d-4bf1-bace-9d6a5a95d95e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.958331  670144 system_pods.go:61] "storage-provisioner" [3bc488f6-aa39-42bc-a0f5-173b2d7e07cf] Running
	I0923 12:29:44.958338  670144 system_pods.go:74] duration metric: took 9.414655ms to wait for pod list to return data ...
	I0923 12:29:44.958347  670144 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:29:44.961083  670144 default_sa.go:45] found service account: "default"
	I0923 12:29:44.961109  670144 default_sa.go:55] duration metric: took 2.755138ms for default service account to be created ...
	I0923 12:29:44.961119  670144 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:29:44.967937  670144 system_pods.go:86] 17 kube-system pods found
	I0923 12:29:44.967979  670144 system_pods.go:89] "coredns-7c65d6cfc9-cvw7x" [3de8bd3c-0baf-459b-94f8-f5d52ef1286d] Running
	I0923 12:29:44.967993  670144 system_pods.go:89] "csi-hostpath-attacher-0" [4c3e1f51-c4eb-4fa0-ab09-335efd2aa843] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 12:29:44.968001  670144 system_pods.go:89] "csi-hostpath-resizer-0" [e4676deb-26a8-4a3c-87ac-a226db6563ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 12:29:44.968012  670144 system_pods.go:89] "csi-hostpathplugin-jd2lw" [feb3c94a-858a-4f61-a148-8b64dcfd0934] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 12:29:44.968018  670144 system_pods.go:89] "etcd-addons-052630" [ecb6248b-7e04-4747-946a-eb8fc976147e] Running
	I0923 12:29:44.968024  670144 system_pods.go:89] "kube-apiserver-addons-052630" [578f26c5-733e-4d3b-85da-ecade8aa52dd] Running
	I0923 12:29:44.968029  670144 system_pods.go:89] "kube-controller-manager-addons-052630" [55212af5-b2df-4621-a846-c8912549238d] Running
	I0923 12:29:44.968037  670144 system_pods.go:89] "kube-ingress-dns-minikube" [2187b5c3-511a-4aab-a372-f66d680bbf18] Running
	I0923 12:29:44.968051  670144 system_pods.go:89] "kube-proxy-vn9km" [0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00] Running
	I0923 12:29:44.968057  670144 system_pods.go:89] "kube-scheduler-addons-052630" [a180218d-c5e9-4947-b527-7f9570b9c578] Running
	I0923 12:29:44.968066  670144 system_pods.go:89] "metrics-server-84c5f94fbc-2rhln" [e7c5ceb3-389e-43ff-b807-718f23f12b0f] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 12:29:44.968073  670144 system_pods.go:89] "nvidia-device-plugin-daemonset-fhnrr" [8455a016-6ce8-40d4-bd64-ec3d2e30f774] Running
	I0923 12:29:44.968088  670144 system_pods.go:89] "registry-66c9cd494c-srklj" [ca56f86a-1049-47d9-b11b-9f492f1f0e5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 12:29:44.968100  670144 system_pods.go:89] "registry-proxy-xmmdr" [cf74bb33-75e5-4844-a3a8-fc698241ea5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 12:29:44.968112  670144 system_pods.go:89] "snapshot-controller-56fcc65765-76p2p" [20745ac3-21a3-45a6-8861-c0ba3567f38a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.968131  670144 system_pods.go:89] "snapshot-controller-56fcc65765-pzghc" [e4692d57-c84d-4bf1-bace-9d6a5a95d95e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:29:44.968136  670144 system_pods.go:89] "storage-provisioner" [3bc488f6-aa39-42bc-a0f5-173b2d7e07cf] Running
	I0923 12:29:44.968149  670144 system_pods.go:126] duration metric: took 7.021444ms to wait for k8s-apps to be running ...
	I0923 12:29:44.968165  670144 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:29:44.968233  670144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:29:44.984699  670144 system_svc.go:56] duration metric: took 16.527101ms WaitForService to wait for kubelet
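(Illustrative aside, not part of the captured log.) The system_svc.go lines above verify the kubelet unit by running "sudo systemctl is-active --quiet service kubelet" over SSH and treating a zero exit code as "running". A minimal Go sketch of the same check, assuming the command is run locally with the plain unit name rather than on the guest through minikube's ssh_runner, could look like this:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet systemd unit is active:
// "systemctl is-active --quiet <unit>" exits 0 only when the unit is active.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	if kubeletActive() {
		fmt.Println("kubelet service is running")
	} else {
		fmt.Println("kubelet service is not active")
	}
}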
	I0923 12:29:44.984736  670144 kubeadm.go:582] duration metric: took 36.885810437s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:29:44.984757  670144 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:29:44.987925  670144 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:29:44.987958  670144 node_conditions.go:123] node cpu capacity is 2
	I0923 12:29:44.987971  670144 node_conditions.go:105] duration metric: took 3.209178ms to run NodePressure ...
	I0923 12:29:44.987984  670144 start.go:241] waiting for startup goroutines ...
	I0923 12:29:45.092993  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:45.381916  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:45.382878  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:45.405371  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:45.592889  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:45.882961  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:45.882986  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:45.905772  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:46.094099  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:46.381480  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:46.381480  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:46.405345  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:46.593680  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:46.881522  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:46.881585  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:46.907463  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:47.092649  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:47.381289  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:47.382803  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:47.404633  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:47.593242  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:47.881017  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:47.881741  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:47.905476  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:48.094283  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:48.381287  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:48.381678  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:48.404848  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:48.593290  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:49.182575  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:49.182862  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:49.183278  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:49.183600  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:49.387493  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:49.387949  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:49.409172  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:49.593041  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:49.881864  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:49.882012  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:49.905486  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:50.093223  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:50.381524  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:50.381911  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:50.405382  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:50.593121  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:50.882078  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:50.882130  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:50.904664  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:51.094395  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:51.381785  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:51.382965  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:51.404814  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:51.593466  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:51.881718  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:51.882182  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:51.906271  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:52.093535  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:52.381560  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:52.382447  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:52.483055  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:52.592715  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:52.882614  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:52.882831  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:52.905337  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:53.099377  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:53.382358  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:53.382434  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:53.405014  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:53.593255  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:53.881701  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:53.882109  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:53.905214  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:54.093317  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:54.381400  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:54.381756  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:54.405603  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:54.593298  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:54.881505  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:29:54.882280  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:54.905352  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:55.096080  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:55.381500  670144 kapi.go:107] duration metric: took 39.004256174s to wait for kubernetes.io/minikube-addons=registry ...
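(Illustrative aside, not part of the captured log.) The kapi.go lines that dominate this section poll the pods matching a label selector (for example "kubernetes.io/minikube-addons=registry") and report "current state: Pending" until every matching pod is Running, at which point kapi.go:107 logs the total wait, as in the registry line just above. A minimal client-go sketch of such a wait loop, with the kubeconfig path, namespace, selector and poll interval chosen only for illustration, could look like this:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector in ns is Running,
// or until ctx expires.
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false // still the "current state: Pending" case above
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q: %w", selector, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		fmt.Println(err)
	}
}

The sketch only covers the basic poll-until-Running shape of the loop; it does not reproduce the rest of the helper's error handling.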
	I0923 12:29:55.382262  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:55.407177  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:55.593060  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:55.881873  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:55.906292  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:56.095168  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:56.467534  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:56.467800  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:56.593413  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:56.881611  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:56.905852  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:57.093199  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:57.380555  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:57.407044  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:57.821632  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:57.881537  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:57.906086  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:58.093251  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:58.381225  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:58.405370  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:58.592999  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:58.882363  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:58.905848  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:59.092799  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:59.381850  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:59.405243  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:29:59.592647  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:29:59.883180  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:29:59.905462  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:00.093783  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:00.381525  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:00.405496  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:00.593067  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:00.882096  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:00.905415  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:01.093248  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:01.381090  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:01.404657  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:01.592915  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:01.881472  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:01.904650  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:02.094989  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:02.381519  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:02.482813  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:02.592969  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:02.881994  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:02.905592  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:03.092833  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:03.382442  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:03.737000  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:03.737731  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:03.881239  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:03.908549  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:04.092952  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:04.382596  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:04.406348  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:04.592523  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:04.882260  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:04.906335  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:05.093281  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:05.381532  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:05.404962  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:05.593867  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:05.881533  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:05.905611  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:06.092910  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:06.382350  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:06.405359  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:06.592970  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:06.881573  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:06.905700  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:07.093261  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:07.383765  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:07.406221  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:07.593359  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:07.881515  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:07.905283  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:08.094381  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:08.436545  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:08.437214  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:08.595352  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:08.881471  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:08.904728  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:09.094082  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:09.382329  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:09.418347  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:09.592417  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:09.882579  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:09.905086  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:10.093585  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:10.381916  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:10.408107  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:10.593205  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:10.881583  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:10.906213  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:11.092377  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:11.381528  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:11.405175  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:11.593188  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:11.881123  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:11.906575  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:12.093361  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:12.381510  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:12.418229  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:12.594390  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:12.883421  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:12.905655  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:13.093231  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:13.380738  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:13.409871  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:13.592706  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:13.881963  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:13.906221  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:14.092914  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:14.382057  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:14.405898  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:14.593405  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:14.883241  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:14.905532  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:15.092900  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:15.381659  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:15.404674  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:15.595837  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:15.884204  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:15.906723  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:16.096714  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:16.398360  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:16.492006  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:16.593666  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:16.886491  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:16.907334  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:17.105994  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:17.383325  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:17.406532  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:17.592593  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:17.881884  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:17.906107  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:18.098950  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:18.382178  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:18.406919  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:18.593795  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:18.881986  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:18.907032  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:19.093203  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:19.385652  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:19.486193  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:19.593670  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:20.158045  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:20.160442  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:20.160600  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:20.381193  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:20.406353  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:20.592767  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:20.881653  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:20.906233  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:21.092756  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:21.381504  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:21.404711  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:21.593682  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:21.882663  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:21.905651  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:22.094019  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:22.381116  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:22.482594  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:22.593429  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:22.882120  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:22.907262  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:23.093012  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:23.381337  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:23.416798  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:23.605942  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:23.883914  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:23.905484  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:24.092422  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:24.382490  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:24.404543  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:24.593615  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:24.882704  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:24.905157  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:25.092234  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:25.381913  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:25.406353  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:25.593550  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:25.881420  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:25.905759  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:26.092760  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:26.382791  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:26.404663  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:26.593511  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:26.881695  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:26.906109  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:27.092908  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:27.381352  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:27.405542  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:27.593292  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:27.881677  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:27.905877  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:28.093483  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:28.381903  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:28.405916  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:28.596909  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:28.883234  670144 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:30:28.907825  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:29.093630  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:29.384206  670144 kapi.go:107] duration metric: took 1m13.007346283s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 12:30:29.408031  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:29.593154  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:29.905366  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:30.096542  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:30.407476  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:30.593391  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:30.905711  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:31.093234  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:31.406100  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:31.593583  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:31.905683  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:32.093451  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:32.405762  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:32.593457  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:32.906615  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:33.092949  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:33.405990  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:33.593662  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:33.908125  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:34.095552  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:30:34.410315  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:34.593641  670144 kapi.go:107] duration metric: took 1m15.004433334s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 12:30:34.596145  670144 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-052630 cluster.
	I0923 12:30:34.597867  670144 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 12:30:34.599357  670144 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 12:30:34.905455  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:35.406462  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:35.906240  670144 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:30:36.408440  670144 kapi.go:107] duration metric: took 1m18.507800959s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 12:30:36.410763  670144 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, ingress-dns, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0923 12:30:36.412731  670144 addons.go:510] duration metric: took 1m28.313766491s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass ingress-dns inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0923 12:30:36.412794  670144 start.go:246] waiting for cluster config update ...
	I0923 12:30:36.412829  670144 start.go:255] writing updated cluster config ...
	I0923 12:30:36.413342  670144 ssh_runner.go:195] Run: rm -f paused
	I0923 12:30:36.467246  670144 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 12:30:36.469473  670144 out.go:177] * Done! kubectl is now configured to use "addons-052630" cluster and "default" namespace by default
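As a minimal sketch of the gcp-auth opt-out described in the log above (hypothetical pod name and image; the log message only names the `gcp-auth-skip-secret` label key, so the label value used here is an assumption):

	kubectl --context addons-052630 run skip-gcp-auth-demo --image=busybox --restart=Never --labels=gcp-auth-skip-secret=true -- sleep 3600

With that label present on the pod, the gcp-auth webhook should leave the GCP credentials unmounted for it, per the message in the log; pods created without the label continue to get the credentials mounted automatically.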
	
	
	==> CRI-O <==
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.046079352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3f708a6-2e2d-46f0-9151-636be80f983c name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.046352066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a089a3781add8595157385aad0947e2ea4b8c1571897261093173772dbd4029e,PodSandboxId:f8ba55a3e9041e3657843b6ffc7ffd919779e5373e2065f582f9201f5dbf0774,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727095295770795736,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qzcw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b254feb-b4af-4f12-9e52-a816f5d00bac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd658c0598e6c49415ca300ec19c8efc652697d90ca659d5332bd0cc8f9da0ce,PodSandboxId:e9d41568c174048781bd2e547ce07b9b7f13bd648556c363403a06a7374416ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727095155775653048,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 487480e4-f024-4e3c-9c18-a9aabd6129fb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c427e0695fa7dfe118179b0685857c7d96bbed4dca69a80b42715eb28daf3f3,PodSandboxId:e0f536b5e92b1765bbec31f330b1cbfc55061818c897748a2f248d41719fbcd7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727094633948657283,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-gzksd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 1b75c160-3198-402b-b135-861e77ac4482,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f1ae050ce475e5a505a980ea72122b45036c60002591f0381f922671fc411a,PodSandboxId:17d85166b8277c2a9faa6b4607652c23931a05692eb0e979f495fa4c4552c2f9,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727094606636049364,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-snqv8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 43c09017-cfad-4a08-b73c-bfba508afe73,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5,PodSandboxId:dfa6385e052b942da39e7f1efb907744acba0e7c89c40514021b4c90d419d7bc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727094558
710109886,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rhln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c5ceb3-389e-43ff-b807-718f23f12b0f,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bbd55bde08fee5d7aeb446829fa511ea633c6594a8f94dbc19f40954380b59,PodSandboxId:7fc2b63648c6ce7f74862f514ca11336f589ba36807a84f82b5fe966e703bba1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727094554932322734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc488f6-aa39-42bc-a0f5-173b2d7e07cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2700e6a975e0821a451d1a3a41fc665ed1652d4380515018e498434fe7a5a0ff,PodSandboxId:f5725c70d12571297f1fbc08fcf7c6634ea79b711270178cb2861d7a021f4a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727094551725672407,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvw7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de8bd3c-0baf-459b-94f8-f5d52ef1286d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00,PodSandboxId:d54027fa53db00e856f587b7398dfbee79868ce10d8c9bc030a174a635717867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727094549016200714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn9km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a,PodSandboxId:1a45969da935e2684242fa5b07b35eaa8001d3fe9d4867c4f31f2152672a0eea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575ee
d91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727094538170986390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd793e50c81059d44a1e6fde8a448895,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85,PodSandboxId:8618182b0365790203283b2a6cd2de064a98724d33806cc9f4eedfc629ad8516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904
b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727094538165838825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7efdfb9180b7292c18423e02021138d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0,PodSandboxId:2f48abf774e208d8f1e5e0d05f63bfa69400ab9e4bb0147be37e97f07eed1343,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca50
48cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727094538113594059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1947c799ac122c11eb2c15f2bc9fdc08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81,PodSandboxId:a16e26d2dc6966551d559c1a5d3db6a99724044ad4418a767d04c065c600a61d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727094538130237781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c71f38e20d8cf8d860ac88cdd9241f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3f708a6-2e2d-46f0-9151-636be80f983c name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.083286704Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6af9a782-92b7-46a7-9bee-72c902bf7c18 name=/runtime.v1.RuntimeService/Version
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.083419241Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6af9a782-92b7-46a7-9bee-72c902bf7c18 name=/runtime.v1.RuntimeService/Version
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.088462868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9ca2de1-0664-4d96-9bad-201e29674194 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.091934627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095474091906937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9ca2de1-0664-4d96-9bad-201e29674194 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.092724897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4f01668-8d11-4b13-a67a-f485ec9f8f13 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.092789523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4f01668-8d11-4b13-a67a-f485ec9f8f13 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.093064693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a089a3781add8595157385aad0947e2ea4b8c1571897261093173772dbd4029e,PodSandboxId:f8ba55a3e9041e3657843b6ffc7ffd919779e5373e2065f582f9201f5dbf0774,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727095295770795736,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qzcw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b254feb-b4af-4f12-9e52-a816f5d00bac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd658c0598e6c49415ca300ec19c8efc652697d90ca659d5332bd0cc8f9da0ce,PodSandboxId:e9d41568c174048781bd2e547ce07b9b7f13bd648556c363403a06a7374416ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727095155775653048,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 487480e4-f024-4e3c-9c18-a9aabd6129fb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c427e0695fa7dfe118179b0685857c7d96bbed4dca69a80b42715eb28daf3f3,PodSandboxId:e0f536b5e92b1765bbec31f330b1cbfc55061818c897748a2f248d41719fbcd7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727094633948657283,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-gzksd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 1b75c160-3198-402b-b135-861e77ac4482,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f1ae050ce475e5a505a980ea72122b45036c60002591f0381f922671fc411a,PodSandboxId:17d85166b8277c2a9faa6b4607652c23931a05692eb0e979f495fa4c4552c2f9,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727094606636049364,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-snqv8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 43c09017-cfad-4a08-b73c-bfba508afe73,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5,PodSandboxId:dfa6385e052b942da39e7f1efb907744acba0e7c89c40514021b4c90d419d7bc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727094558
710109886,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rhln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c5ceb3-389e-43ff-b807-718f23f12b0f,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bbd55bde08fee5d7aeb446829fa511ea633c6594a8f94dbc19f40954380b59,PodSandboxId:7fc2b63648c6ce7f74862f514ca11336f589ba36807a84f82b5fe966e703bba1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727094554932322734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc488f6-aa39-42bc-a0f5-173b2d7e07cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2700e6a975e0821a451d1a3a41fc665ed1652d4380515018e498434fe7a5a0ff,PodSandboxId:f5725c70d12571297f1fbc08fcf7c6634ea79b711270178cb2861d7a021f4a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727094551725672407,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvw7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de8bd3c-0baf-459b-94f8-f5d52ef1286d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00,PodSandboxId:d54027fa53db00e856f587b7398dfbee79868ce10d8c9bc030a174a635717867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727094549016200714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn9km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a,PodSandboxId:1a45969da935e2684242fa5b07b35eaa8001d3fe9d4867c4f31f2152672a0eea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575ee
d91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727094538170986390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd793e50c81059d44a1e6fde8a448895,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85,PodSandboxId:8618182b0365790203283b2a6cd2de064a98724d33806cc9f4eedfc629ad8516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904
b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727094538165838825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7efdfb9180b7292c18423e02021138d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0,PodSandboxId:2f48abf774e208d8f1e5e0d05f63bfa69400ab9e4bb0147be37e97f07eed1343,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca50
48cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727094538113594059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1947c799ac122c11eb2c15f2bc9fdc08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81,PodSandboxId:a16e26d2dc6966551d559c1a5d3db6a99724044ad4418a767d04c065c600a61d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727094538130237781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c71f38e20d8cf8d860ac88cdd9241f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4f01668-8d11-4b13-a67a-f485ec9f8f13 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.111665083Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ed9fda8e-774c-439e-8a96-329c996a3589 name=/runtime.v1.ImageService/ListImages
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.112808546Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,RepoTags:[registry.k8s.io/kube-apiserver:v1.31.1],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771 registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb],Size_:95237600,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748],Size_:89437508,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image
{Id:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,RepoTags:[registry.k8s.io/kube-scheduler:v1.31.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0 registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8],Size_:68420934,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,RepoTags:[registry.k8s.io/kube-proxy:v1.31.1],RepoDigests:[registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44 registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2],Size_:92733849,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,RepoTags:[registry.k8s.io/pause:3.10],RepoDigests:[registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a],Size_:742080,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,RepoTags:[registry.k8s.io/etcd:3.5.15-0],RepoDigests:[registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a],Size_:149009664,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.3],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50],Size_:63273227,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,RepoTags:[docker.io/kindest/kindnetd:v20240813-c6f155d6],RepoDigests:[docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166],Size_:87190579,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,RepoTags:[],RepoDigests:[registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a registry.k8s.io/metrics-server
/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9],Size_:68126408,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},&Image{Id:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c],Size_:202780266,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,RepoTags:[],RepoDigests:[gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e],Size_:131068228,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace
3e,RepoTags:[],RepoDigests:[nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 nvcr.io/nvidia/k8s-device-plugin@sha256:fe3da09abe6509c2200f29049ac3ae5ff1277d9653972b4b391348655f8cd944],Size_:354405291,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4 gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367],Size_:190875476,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,RepoTags:[],RepoDigests:[docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7 docker.io/library/registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90],Size_:26002343,Uid:nil,Username:,Spec:nil,Pin
ned:false,},&Image{Id:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec ghcr.io/inspektor-gadget/inspektor-gadget@sha256:80a3bcbb29ca0fd2aae79ec8aad1e690dd02c7616a34e723a03fd5160888135c],Size_:176758647,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,RepoTags:[],RepoDigests:[docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624],Size_:205121029,Uid:&Int64Value{Value:10001,},Username:,Spec:nil,Pinned:false,},&Image{Id:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,RepoTags:[],RepoDigests:[docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef docker.io/rancher/loc
al-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246],Size_:35264960,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7],Size_:57899101,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c],Size_:56980232,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:ce263a8653
f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,RepoTags:[],RepoDigests:[registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3],Size_:55800714,Uid:&Int64Value{Value:65532,},Username:,Spec:nil,Pinned:false,},&Image{Id:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280],Size_:54632579,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d
10519d9bf0 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b],Size_:57303140,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c],Size_:21521620,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5],Size_:37200280,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{I
d:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0],Size_:19577497,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,RepoTags:[],RepoDigests:[registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6 registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce],Size_:288720505,Uid:nil,Username:www-data,Spec:nil,Pinned:false,},&Image{Id:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7 registry.k8s.io/sig
-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8],Size_:60675705,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb],Size_:50809022,Uid:&Int64Value{Value:65532,},Username:,Spec:nil,Pinned:false,},&Image{Id:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5],Size_:57410185,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:41eb637aa779284762db7a79fac77894d8f
c6d967404e9c7f0760cb4c97a4766,RepoTags:[],RepoDigests:[ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a ghcr.io/headlamp-k8s/headlamp@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c],Size_:187915653,Uid:nil,Username:headlamp,Spec:nil,Pinned:false,},&Image{Id:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3 docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e],Size_:191853369,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,RepoTags:[docker.io/library/nginx:alpine],RepoDigests:[docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f9
7dcf],Size_:44647101,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,RepoTags:[],RepoDigests:[docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79],Size_:4497096,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,RepoTags:[docker.io/library/busybox:stable],RepoDigests:[docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f docker.io/library/busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140],Size_:4507152,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104da
cd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=ed9fda8e-774c-439e-8a96-329c996a3589 name=/runtime.v1.ImageService/ListImages
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.129054003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c405614-b514-43f3-b638-f09166004dc9 name=/runtime.v1.RuntimeService/Version
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.129130774Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c405614-b514-43f3-b638-f09166004dc9 name=/runtime.v1.RuntimeService/Version
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.130312187Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b367284d-d868-430c-8597-15619ea7b568 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.131417763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095474131393262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b367284d-d868-430c-8597-15619ea7b568 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.131887676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8473a68-e61c-42f3-a6b5-aa989a984d75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.131967027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8473a68-e61c-42f3-a6b5-aa989a984d75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.132267518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a089a3781add8595157385aad0947e2ea4b8c1571897261093173772dbd4029e,PodSandboxId:f8ba55a3e9041e3657843b6ffc7ffd919779e5373e2065f582f9201f5dbf0774,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727095295770795736,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qzcw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b254feb-b4af-4f12-9e52-a816f5d00bac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd658c0598e6c49415ca300ec19c8efc652697d90ca659d5332bd0cc8f9da0ce,PodSandboxId:e9d41568c174048781bd2e547ce07b9b7f13bd648556c363403a06a7374416ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727095155775653048,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 487480e4-f024-4e3c-9c18-a9aabd6129fb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c427e0695fa7dfe118179b0685857c7d96bbed4dca69a80b42715eb28daf3f3,PodSandboxId:e0f536b5e92b1765bbec31f330b1cbfc55061818c897748a2f248d41719fbcd7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727094633948657283,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-gzksd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 1b75c160-3198-402b-b135-861e77ac4482,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f1ae050ce475e5a505a980ea72122b45036c60002591f0381f922671fc411a,PodSandboxId:17d85166b8277c2a9faa6b4607652c23931a05692eb0e979f495fa4c4552c2f9,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727094606636049364,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-snqv8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 43c09017-cfad-4a08-b73c-bfba508afe73,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5,PodSandboxId:dfa6385e052b942da39e7f1efb907744acba0e7c89c40514021b4c90d419d7bc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727094558
710109886,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rhln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c5ceb3-389e-43ff-b807-718f23f12b0f,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bbd55bde08fee5d7aeb446829fa511ea633c6594a8f94dbc19f40954380b59,PodSandboxId:7fc2b63648c6ce7f74862f514ca11336f589ba36807a84f82b5fe966e703bba1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727094554932322734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc488f6-aa39-42bc-a0f5-173b2d7e07cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2700e6a975e0821a451d1a3a41fc665ed1652d4380515018e498434fe7a5a0ff,PodSandboxId:f5725c70d12571297f1fbc08fcf7c6634ea79b711270178cb2861d7a021f4a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727094551725672407,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvw7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de8bd3c-0baf-459b-94f8-f5d52ef1286d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00,PodSandboxId:d54027fa53db00e856f587b7398dfbee79868ce10d8c9bc030a174a635717867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727094549016200714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn9km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a,PodSandboxId:1a45969da935e2684242fa5b07b35eaa8001d3fe9d4867c4f31f2152672a0eea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575ee
d91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727094538170986390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd793e50c81059d44a1e6fde8a448895,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85,PodSandboxId:8618182b0365790203283b2a6cd2de064a98724d33806cc9f4eedfc629ad8516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904
b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727094538165838825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7efdfb9180b7292c18423e02021138d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0,PodSandboxId:2f48abf774e208d8f1e5e0d05f63bfa69400ab9e4bb0147be37e97f07eed1343,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca50
48cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727094538113594059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1947c799ac122c11eb2c15f2bc9fdc08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81,PodSandboxId:a16e26d2dc6966551d559c1a5d3db6a99724044ad4418a767d04c065c600a61d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727094538130237781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c71f38e20d8cf8d860ac88cdd9241f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8473a68-e61c-42f3-a6b5-aa989a984d75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.167225809Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a4368ce-a4f0-48fc-ae6d-7de6b80cd89c name=/runtime.v1.RuntimeService/Version
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.167298329Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a4368ce-a4f0-48fc-ae6d-7de6b80cd89c name=/runtime.v1.RuntimeService/Version
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.168601999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbba92e3-c096-49f3-8689-df9483e277b9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.170386561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095474170358110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbba92e3-c096-49f3-8689-df9483e277b9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.170877690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ebfff54-2d45-42ff-8a6a-577a691fed90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.170954454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ebfff54-2d45-42ff-8a6a-577a691fed90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 12:44:34 addons-052630 crio[664]: time="2024-09-23 12:44:34.171239867Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a089a3781add8595157385aad0947e2ea4b8c1571897261093173772dbd4029e,PodSandboxId:f8ba55a3e9041e3657843b6ffc7ffd919779e5373e2065f582f9201f5dbf0774,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727095295770795736,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qzcw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0b254feb-b4af-4f12-9e52-a816f5d00bac,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd658c0598e6c49415ca300ec19c8efc652697d90ca659d5332bd0cc8f9da0ce,PodSandboxId:e9d41568c174048781bd2e547ce07b9b7f13bd648556c363403a06a7374416ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727095155775653048,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 487480e4-f024-4e3c-9c18-a9aabd6129fb,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c427e0695fa7dfe118179b0685857c7d96bbed4dca69a80b42715eb28daf3f3,PodSandboxId:e0f536b5e92b1765bbec31f330b1cbfc55061818c897748a2f248d41719fbcd7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727094633948657283,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-gzksd,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 1b75c160-3198-402b-b135-861e77ac4482,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f1ae050ce475e5a505a980ea72122b45036c60002591f0381f922671fc411a,PodSandboxId:17d85166b8277c2a9faa6b4607652c23931a05692eb0e979f495fa4c4552c2f9,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727094606636049364,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-snqv8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 43c09017-cfad-4a08-b73c-bfba508afe73,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5,PodSandboxId:dfa6385e052b942da39e7f1efb907744acba0e7c89c40514021b4c90d419d7bc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727094558
710109886,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-2rhln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7c5ceb3-389e-43ff-b807-718f23f12b0f,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58bbd55bde08fee5d7aeb446829fa511ea633c6594a8f94dbc19f40954380b59,PodSandboxId:7fc2b63648c6ce7f74862f514ca11336f589ba36807a84f82b5fe966e703bba1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727094554932322734,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bc488f6-aa39-42bc-a0f5-173b2d7e07cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2700e6a975e0821a451d1a3a41fc665ed1652d4380515018e498434fe7a5a0ff,PodSandboxId:f5725c70d12571297f1fbc08fcf7c6634ea79b711270178cb2861d7a021f4a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727094551725672407,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvw7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3de8bd3c-0baf-459b-94f8-f5d52ef1286d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00,PodSandboxId:d54027fa53db00e856f587b7398dfbee79868ce10d8c9bc030a174a635717867,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727094549016200714,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn9km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e10d00e-8de3-4f7e-ab59-d0f9e93b2f00,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a,PodSandboxId:1a45969da935e2684242fa5b07b35eaa8001d3fe9d4867c4f31f2152672a0eea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575ee
d91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727094538170986390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd793e50c81059d44a1e6fde8a448895,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85,PodSandboxId:8618182b0365790203283b2a6cd2de064a98724d33806cc9f4eedfc629ad8516,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904
b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727094538165838825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7efdfb9180b7292c18423e02021138d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0,PodSandboxId:2f48abf774e208d8f1e5e0d05f63bfa69400ab9e4bb0147be37e97f07eed1343,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca50
48cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727094538113594059,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1947c799ac122c11eb2c15f2bc9fdc08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81,PodSandboxId:a16e26d2dc6966551d559c1a5d3db6a99724044ad4418a767d04c065c600a61d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727094538130237781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-052630,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c71f38e20d8cf8d860ac88cdd9241f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ebfff54-2d45-42ff-8a6a-577a691fed90 name=/runtime.v1.RuntimeService/ListContainers
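
Editor's note: the crio debug entries above are the runtime.v1 CRI gRPC calls (Version, ImageFsInfo, ListImages, ListContainers) that CRI-O served around 12:44:34 while these logs were being collected. For orientation only, here is a minimal, hypothetical Go sketch of issuing the same Version and ListContainers calls against the same endpoint; it is not part of minikube or this test, and the socket path is an assumed default.

    // listcontainers.go - illustrative sketch only (not part of minikube or this
    // test): issue the same runtime.v1 Version and ListContainers calls that the
    // crio debug entries above record, over the default CRI-O socket (assumed
    // path /var/run/crio/crio.sock).
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // grpc-go resolves "unix://" targets with its built-in unix resolver.
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial crio: %v", err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Mirrors the /runtime.v1.RuntimeService/Version request in the log.
        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatalf("version: %v", err)
        }
        fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

        // An empty filter is what produces "No filters were applied, returning
        // full container list" in the entries above.
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatalf("list containers: %v", err)
        }
        for _, c := range resp.Containers {
            name := ""
            if c.Metadata != nil {
                name = c.Metadata.Name
            }
            fmt.Printf("%.13s  %-25s %s\n", c.Id, name, c.State)
        }
    }

The "container status" table below shows the same container list in tabular form.
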
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a089a3781add8       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   f8ba55a3e9041       hello-world-app-55bf9c44b4-qzcw6
	dd658c0598e6c       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         5 minutes ago       Running             nginx                     0                   e9d41568c1740       nginx
	4c427e0695fa7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   e0f536b5e92b1       gcp-auth-89d5ffd79-gzksd
	50f1ae050ce47       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        14 minutes ago      Running             local-path-provisioner    0                   17d85166b8277       local-path-provisioner-86d989889c-snqv8
	54c2b9200f7a3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago      Running             metrics-server            0                   dfa6385e052b9       metrics-server-84c5f94fbc-2rhln
	58bbd55bde08f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   7fc2b63648c6c       storage-provisioner
	2700e6a975e08       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   f5725c70d1257       coredns-7c65d6cfc9-cvw7x
	4f2e68fe05415       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   d54027fa53db0       kube-proxy-vn9km
	2d98809372a26       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   1a45969da935e       kube-scheduler-addons-052630
	137997c74fead       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   8618182b03657       kube-controller-manager-addons-052630
	b706da2e61377       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   a16e26d2dc696       kube-apiserver-addons-052630
	84885d234fc5d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   2f48abf774e20       etcd-addons-052630
	
	
	==> coredns [2700e6a975e0821a451d1a3a41fc665ed1652d4380515018e498434fe7a5a0ff] <==
	[INFO] 10.244.0.7:59787 - 46467 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082041s
	[INFO] 10.244.0.21:50719 - 3578 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000678697s
	[INFO] 10.244.0.21:59846 - 36057 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000185909s
	[INFO] 10.244.0.21:51800 - 41027 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131443s
	[INFO] 10.244.0.21:60988 - 60393 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000092533s
	[INFO] 10.244.0.21:37198 - 50317 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088047s
	[INFO] 10.244.0.21:53871 - 9639 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076299s
	[INFO] 10.244.0.21:35205 - 14039 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004685857s
	[INFO] 10.244.0.21:34331 - 9494 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00457672s
	[INFO] 10.244.0.7:43442 - 53421 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000388692s
	[INFO] 10.244.0.7:43442 - 62888 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000079319s
	[INFO] 10.244.0.7:55893 - 18422 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000147084s
	[INFO] 10.244.0.7:55893 - 9973 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095576s
	[INFO] 10.244.0.7:47983 - 23764 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188893s
	[INFO] 10.244.0.7:47983 - 4566 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115139s
	[INFO] 10.244.0.7:50253 - 35636 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000151794s
	[INFO] 10.244.0.7:50253 - 39730 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122834s
	[INFO] 10.244.0.7:52374 - 7303 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000165376s
	[INFO] 10.244.0.7:52374 - 65467 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108039s
	[INFO] 10.244.0.7:38944 - 938 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084751s
	[INFO] 10.244.0.7:38944 - 32437 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074543s
	[INFO] 10.244.0.7:35936 - 54263 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055079s
	[INFO] 10.244.0.7:35936 - 63221 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100045s
	[INFO] 10.244.0.7:58342 - 30223 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010406s
	[INFO] 10.244.0.7:58342 - 58610 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006497s
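
Editor's note: the NXDOMAIN lines above are ordinary Kubernetes search-path expansion, not resolution failures. The kubelet writes options ndots:5 into each pod's /etc/resolv.conf, so a name with fewer than five dots (registry.kube-system.svc.cluster.local has four; storage.googleapis.com has two) is first tried with each search suffix appended; those candidates return NXDOMAIN, and the final absolute query answers NOERROR, exactly as logged. A typical pod resolv.conf for a pod in the kube-system namespace looks like the assumed example below (the actual file is not captured in this report, and the first search suffix changes with the pod's namespace, e.g. gcp-auth for the client at 10.244.0.21):

    search kube-system.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10   # assumed default kube-dns ClusterIP, not taken from this cluster
    options ndots:5
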
	
	
	==> describe nodes <==
	Name:               addons-052630
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-052630
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-052630
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_29_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-052630
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-052630
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:44:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:42:09 +0000   Mon, 23 Sep 2024 12:28:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:42:09 +0000   Mon, 23 Sep 2024 12:28:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:42:09 +0000   Mon, 23 Sep 2024 12:28:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:42:09 +0000   Mon, 23 Sep 2024 12:29:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    addons-052630
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 46d8dccd290a43399ed351791d0287b7
	  System UUID:                46d8dccd-290a-4339-9ed3-51791d0287b7
	  Boot ID:                    aef77f72-28ae-4358-8b71-243c7f96a73e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-qzcw6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  gcp-auth                    gcp-auth-89d5ffd79-gzksd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-cvw7x                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-052630                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-052630               250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-052630      200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-vn9km                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-052630               100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-2rhln            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         15m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  local-path-storage          local-path-provisioner-86d989889c-snqv8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-052630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-052630 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-052630 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-052630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-052630 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-052630 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m                kubelet          Node addons-052630 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-052630 event: Registered Node addons-052630 in Controller
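
Editor's note: the percentages in the Allocated resources table are simply the request/limit totals divided by the node's Allocatable values listed above, e.g.:

    cpu requests:    850m / 2000m (2 CPUs)           ≈ 42%
    memory requests: 370Mi (378880Ki) / 3912780Ki    ≈ 9%
    memory limits:   170Mi (174080Ki) / 3912780Ki    ≈ 4%
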
	
	
	==> dmesg <==
	[  +5.024950] kauditd_printk_skb: 96 callbacks suppressed
	[  +9.303213] kauditd_printk_skb: 112 callbacks suppressed
	[ +30.702636] kauditd_printk_skb: 2 callbacks suppressed
	[Sep23 12:30] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.339998] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.675662] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.264623] kauditd_printk_skb: 74 callbacks suppressed
	[  +7.313349] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.509035] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.134771] kauditd_printk_skb: 52 callbacks suppressed
	[Sep23 12:31] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 12:33] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 12:36] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 12:38] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.581855] kauditd_printk_skb: 6 callbacks suppressed
	[Sep23 12:39] kauditd_printk_skb: 26 callbacks suppressed
	[ +14.124700] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.773860] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.300408] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.891663] kauditd_printk_skb: 6 callbacks suppressed
	[ +11.246442] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.325312] kauditd_printk_skb: 41 callbacks suppressed
	[Sep23 12:40] kauditd_printk_skb: 21 callbacks suppressed
	[Sep23 12:41] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.744867] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [84885d234fc5d6c19b12360b7a7ed082cccb20946dcbedee5d7e8756cd36ffb0] <==
	{"level":"warn","ts":"2024-09-23T12:38:45.586153Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T12:38:45.173553Z","time spent":"412.503245ms","remote":"127.0.0.1:41198","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-052630\" mod_revision:1922 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-052630\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-052630\" > >"}
	{"level":"warn","ts":"2024-09-23T12:38:45.586522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.799311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586554Z","caller":"traceutil/trace.go:171","msg":"trace[1061360463] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1969; }","duration":"240.827325ms","start":"2024-09-23T12:38:45.345712Z","end":"2024-09-23T12:38:45.586540Z","steps":["trace[1061360463] 'agreement among raft nodes before linearized reading'  (duration: 240.547514ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.586713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.275793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586729Z","caller":"traceutil/trace.go:171","msg":"trace[1600622772] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1969; }","duration":"181.293593ms","start":"2024-09-23T12:38:45.405431Z","end":"2024-09-23T12:38:45.586724Z","steps":["trace[1600622772] 'agreement among raft nodes before linearized reading'  (duration: 181.261953ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.586889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.90923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586903Z","caller":"traceutil/trace.go:171","msg":"trace[43504617] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1969; }","duration":"108.925213ms","start":"2024-09-23T12:38:45.477974Z","end":"2024-09-23T12:38:45.586899Z","steps":["trace[43504617] 'agreement among raft nodes before linearized reading'  (duration: 108.900464ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.586971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.015116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:38:45.586992Z","caller":"traceutil/trace.go:171","msg":"trace[1522914426] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1969; }","duration":"109.03651ms","start":"2024-09-23T12:38:45.477951Z","end":"2024-09-23T12:38:45.586988Z","steps":["trace[1522914426] 'agreement among raft nodes before linearized reading'  (duration: 109.008631ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:45.587155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.402947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-09-23T12:38:45.587172Z","caller":"traceutil/trace.go:171","msg":"trace[1003053304] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1969; }","duration":"122.420273ms","start":"2024-09-23T12:38:45.464747Z","end":"2024-09-23T12:38:45.587167Z","steps":["trace[1003053304] 'agreement among raft nodes before linearized reading'  (duration: 122.358904ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:45.588792Z","caller":"traceutil/trace.go:171","msg":"trace[1914231593] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1968; }","duration":"223.550909ms","start":"2024-09-23T12:38:45.361971Z","end":"2024-09-23T12:38:45.585522Z","steps":["trace[1914231593] 'range keys from in-memory index tree'  (duration: 223.329199ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:59.411855Z","caller":"traceutil/trace.go:171","msg":"trace[1835850910] transaction","detail":"{read_only:false; response_revision:2049; number_of_response:1; }","duration":"277.964156ms","start":"2024-09-23T12:38:59.133873Z","end":"2024-09-23T12:38:59.411837Z","steps":["trace[1835850910] 'process raft request'  (duration: 277.797273ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:59.412118Z","caller":"traceutil/trace.go:171","msg":"trace[494165466] linearizableReadLoop","detail":"{readStateIndex:2191; appliedIndex:2191; }","duration":"230.364595ms","start":"2024-09-23T12:38:59.181745Z","end":"2024-09-23T12:38:59.412110Z","steps":["trace[494165466] 'read index received'  (duration: 230.361284ms)","trace[494165466] 'applied index is now lower than readState.Index'  (duration: 2.661µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T12:38:59.412326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.027808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-23T12:38:59.412352Z","caller":"traceutil/trace.go:171","msg":"trace[1017305449] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:2049; }","duration":"166.068608ms","start":"2024-09-23T12:38:59.246275Z","end":"2024-09-23T12:38:59.412343Z","steps":["trace[1017305449] 'agreement among raft nodes before linearized reading'  (duration: 165.97691ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:38:59.412565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.833337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-23T12:38:59.412600Z","caller":"traceutil/trace.go:171","msg":"trace[1433149078] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2049; }","duration":"230.871055ms","start":"2024-09-23T12:38:59.181723Z","end":"2024-09-23T12:38:59.412594Z","steps":["trace[1433149078] 'agreement among raft nodes before linearized reading'  (duration: 230.777381ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:38:59.490314Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1537}
	{"level":"info","ts":"2024-09-23T12:38:59.546892Z","caller":"traceutil/trace.go:171","msg":"trace[1169948736] transaction","detail":"{read_only:false; response_revision:2050; number_of_response:1; }","duration":"130.033368ms","start":"2024-09-23T12:38:59.416838Z","end":"2024-09-23T12:38:59.546872Z","steps":["trace[1169948736] 'process raft request'  (duration: 74.021052ms)","trace[1169948736] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; req_size:1095; } (duration: 55.627555ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T12:38:59.562704Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1537,"took":"71.895193ms","hash":851007697,"current-db-size-bytes":6762496,"current-db-size":"6.8 MB","current-db-size-in-use-bytes":3760128,"current-db-size-in-use":"3.8 MB"}
	{"level":"info","ts":"2024-09-23T12:38:59.562759Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":851007697,"revision":1537,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T12:43:59.498706Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2049}
	{"level":"info","ts":"2024-09-23T12:43:59.522741Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2049,"took":"23.212729ms","hash":795332340,"current-db-size-bytes":6762496,"current-db-size":"6.8 MB","current-db-size-in-use-bytes":4878336,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-23T12:43:59.523204Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":795332340,"revision":2049,"compact-revision":1537}
	
	
	==> gcp-auth [4c427e0695fa7dfe118179b0685857c7d96bbed4dca69a80b42715eb28daf3f3] <==
	2024/09/23 12:30:36 Ready to write response ...
	2024/09/23 12:30:36 Ready to marshal response ...
	2024/09/23 12:30:36 Ready to write response ...
	2024/09/23 12:38:40 Ready to marshal response ...
	2024/09/23 12:38:40 Ready to write response ...
	2024/09/23 12:38:40 Ready to marshal response ...
	2024/09/23 12:38:40 Ready to write response ...
	2024/09/23 12:38:40 Ready to marshal response ...
	2024/09/23 12:38:40 Ready to write response ...
	2024/09/23 12:38:51 Ready to marshal response ...
	2024/09/23 12:38:51 Ready to write response ...
	2024/09/23 12:38:54 Ready to marshal response ...
	2024/09/23 12:38:54 Ready to write response ...
	2024/09/23 12:39:10 Ready to marshal response ...
	2024/09/23 12:39:10 Ready to write response ...
	2024/09/23 12:39:17 Ready to marshal response ...
	2024/09/23 12:39:17 Ready to write response ...
	2024/09/23 12:39:50 Ready to marshal response ...
	2024/09/23 12:39:50 Ready to write response ...
	2024/09/23 12:39:50 Ready to marshal response ...
	2024/09/23 12:39:50 Ready to write response ...
	2024/09/23 12:40:02 Ready to marshal response ...
	2024/09/23 12:40:02 Ready to write response ...
	2024/09/23 12:41:32 Ready to marshal response ...
	2024/09/23 12:41:32 Ready to write response ...
	
	
	==> kernel <==
	 12:44:34 up 16 min,  0 users,  load average: 0.59, 0.50, 0.49
	Linux addons-052630 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b706da2e61377c7ed468c79a4331b242c0011823c88614c8bc039cc285976d81] <==
	E0923 12:30:23.508414       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.53.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.53.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.53.17:443: connect: connection refused" logger="UnhandledError"
	I0923 12:30:23.642288       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 12:38:40.310945       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.127.218"}
	I0923 12:39:05.053206       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 12:39:06.091724       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 12:39:07.866473       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 12:39:10.766646       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 12:39:10.966355       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.172.184"}
	I0923 12:39:32.696168       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.696258       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.715555       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.715618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.748060       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.748123       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.774215       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.775062       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:39:32.821384       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:39:32.821480       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 12:39:33.774424       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 12:39:33.821825       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 12:39:33.904647       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0923 12:41:32.916892       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.99.88"}
	E0923 12:41:35.730354       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0923 12:41:38.452991       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0923 12:41:38.458981       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [137997c74feadea0b206e40066df0bab268bc86a43379e84dcea2cf1d5c37c85] <==
	W0923 12:42:46.287234       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:42:46.287348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:42:58.047668       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:42:58.047721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:43:02.879792       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:43:02.879969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:43:16.775867       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:43:16.776109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:43:16.860950       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:43:16.861049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:43:33.950770       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:43:33.950954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:43:55.556261       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:43:55.556312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:43:57.400839       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:43:57.400910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:44:01.036217       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:44:01.036268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:44:06.404414       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:44:06.404494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:44:33.098776       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="10.707µs"
	W0923 12:44:33.159224       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:44:33.159358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:44:33.769284       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:44:33.769363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [4f2e68fe054158153cd0c8a69f419c5737179e35fdb015065c2b0c5026242a00] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 12:29:09.744228       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 12:29:09.770791       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.225"]
	E0923 12:29:09.770866       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 12:29:09.869461       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 12:29:09.869490       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 12:29:09.869514       1 server_linux.go:169] "Using iptables Proxier"
	I0923 12:29:09.873228       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 12:29:09.873652       1 server.go:483] "Version info" version="v1.31.1"
	I0923 12:29:09.873664       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:29:09.875209       1 config.go:199] "Starting service config controller"
	I0923 12:29:09.875235       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 12:29:09.875268       1 config.go:105] "Starting endpoint slice config controller"
	I0923 12:29:09.875271       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 12:29:09.875715       1 config.go:328] "Starting node config controller"
	I0923 12:29:09.875721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 12:29:09.975594       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 12:29:09.976446       1 shared_informer.go:320] Caches are synced for node config
	I0923 12:29:09.976502       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [2d98809372a261156c26bb6e7875a9195290bc295be13167b14faf4bcfd7ac5a] <==
	W0923 12:29:00.681864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:29:00.681896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:00.681942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 12:29:00.681966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:00.681871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:00.682069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:00.682524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 12:29:00.682555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.521067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 12:29:01.521115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.593793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:29:01.593842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.675102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:29:01.675475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.701107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 12:29:01.701156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.718193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:01.718242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.750179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:01.750230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.832371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 12:29:01.832582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:29:01.940561       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:29:01.940868       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 12:29:04.675339       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 12:44:03 addons-052630 kubelet[1207]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 12:44:03 addons-052630 kubelet[1207]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 12:44:03 addons-052630 kubelet[1207]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 12:44:03 addons-052630 kubelet[1207]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 12:44:03 addons-052630 kubelet[1207]: E0923 12:44:03.563563    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095443562895389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 12:44:03 addons-052630 kubelet[1207]: E0923 12:44:03.563680    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095443562895389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 12:44:06 addons-052630 kubelet[1207]: E0923 12:44:06.123114    1207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="54b6f502-dc45-4c6f-b200-f29eb7e0a0c3"
	Sep 23 12:44:13 addons-052630 kubelet[1207]: E0923 12:44:13.566753    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095453566203967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 12:44:13 addons-052630 kubelet[1207]: E0923 12:44:13.566803    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095453566203967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 12:44:18 addons-052630 kubelet[1207]: E0923 12:44:18.123800    1207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="54b6f502-dc45-4c6f-b200-f29eb7e0a0c3"
	Sep 23 12:44:23 addons-052630 kubelet[1207]: E0923 12:44:23.569956    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095463569374198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 12:44:23 addons-052630 kubelet[1207]: E0923 12:44:23.570083    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095463569374198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 12:44:33 addons-052630 kubelet[1207]: E0923 12:44:33.125432    1207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="54b6f502-dc45-4c6f-b200-f29eb7e0a0c3"
	Sep 23 12:44:33 addons-052630 kubelet[1207]: E0923 12:44:33.574163    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095473573593897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 12:44:33 addons-052630 kubelet[1207]: E0923 12:44:33.574202    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727095473573593897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 12:44:34 addons-052630 kubelet[1207]: I0923 12:44:34.556697    1207 scope.go:117] "RemoveContainer" containerID="54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5"
	Sep 23 12:44:34 addons-052630 kubelet[1207]: I0923 12:44:34.580070    1207 scope.go:117] "RemoveContainer" containerID="54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5"
	Sep 23 12:44:34 addons-052630 kubelet[1207]: E0923 12:44:34.580694    1207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5\": container with ID starting with 54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5 not found: ID does not exist" containerID="54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5"
	Sep 23 12:44:34 addons-052630 kubelet[1207]: I0923 12:44:34.580734    1207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5"} err="failed to get container status \"54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5\": rpc error: code = NotFound desc = could not find container \"54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5\": container with ID starting with 54c2b9200f7a37ef1e8ff5b91ed0bd719859f18fd8e04d31045255bb46a563b5 not found: ID does not exist"
	Sep 23 12:44:34 addons-052630 kubelet[1207]: I0923 12:44:34.595257    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whqsq\" (UniqueName: \"kubernetes.io/projected/e7c5ceb3-389e-43ff-b807-718f23f12b0f-kube-api-access-whqsq\") pod \"e7c5ceb3-389e-43ff-b807-718f23f12b0f\" (UID: \"e7c5ceb3-389e-43ff-b807-718f23f12b0f\") "
	Sep 23 12:44:34 addons-052630 kubelet[1207]: I0923 12:44:34.595298    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e7c5ceb3-389e-43ff-b807-718f23f12b0f-tmp-dir\") pod \"e7c5ceb3-389e-43ff-b807-718f23f12b0f\" (UID: \"e7c5ceb3-389e-43ff-b807-718f23f12b0f\") "
	Sep 23 12:44:34 addons-052630 kubelet[1207]: I0923 12:44:34.596344    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c5ceb3-389e-43ff-b807-718f23f12b0f-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "e7c5ceb3-389e-43ff-b807-718f23f12b0f" (UID: "e7c5ceb3-389e-43ff-b807-718f23f12b0f"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 23 12:44:34 addons-052630 kubelet[1207]: I0923 12:44:34.601321    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c5ceb3-389e-43ff-b807-718f23f12b0f-kube-api-access-whqsq" (OuterVolumeSpecName: "kube-api-access-whqsq") pod "e7c5ceb3-389e-43ff-b807-718f23f12b0f" (UID: "e7c5ceb3-389e-43ff-b807-718f23f12b0f"). InnerVolumeSpecName "kube-api-access-whqsq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:44:34 addons-052630 kubelet[1207]: I0923 12:44:34.696132    1207 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-whqsq\" (UniqueName: \"kubernetes.io/projected/e7c5ceb3-389e-43ff-b807-718f23f12b0f-kube-api-access-whqsq\") on node \"addons-052630\" DevicePath \"\""
	Sep 23 12:44:34 addons-052630 kubelet[1207]: I0923 12:44:34.696164    1207 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e7c5ceb3-389e-43ff-b807-718f23f12b0f-tmp-dir\") on node \"addons-052630\" DevicePath \"\""
	
	
	==> storage-provisioner [58bbd55bde08fee5d7aeb446829fa511ea633c6594a8f94dbc19f40954380b59] <==
	I0923 12:29:15.418528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 12:29:15.469448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 12:29:15.469505       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 12:29:15.499374       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 12:29:15.512080       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-052630_666822b7-806c-46b8-b021-ef12b62fd031!
	I0923 12:29:15.512828       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16ab68d2-163f-4497-86c2-19800b48c856", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-052630_666822b7-806c-46b8-b021-ef12b62fd031 became leader
	I0923 12:29:15.856800       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-052630_666822b7-806c-46b8-b021-ef12b62fd031!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-052630 -n addons-052630
helpers_test.go:261: (dbg) Run:  kubectl --context addons-052630 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-052630 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-052630 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-052630/192.168.39.225
	Start Time:       Mon, 23 Sep 2024 12:30:36 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hx7h2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hx7h2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-052630
	  Normal   Pulling    12m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m46s (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (355.95s)
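The post-mortem above narrows this failure to a single non-Running pod: busybox, stuck in ImagePullBackOff because pulls of gcr.io/k8s-minikube/busybox:1.28.4-glibc are rejected with "unable to retrieve auth token: invalid username/password". The helper finds such pods by shelling out to kubectl with --field-selector=status.phase!=Running. Purely as an illustration (this is not code from helpers_test.go; the kubeconfig path and the lack of explicit --context handling are assumptions), the same query expressed with client-go looks roughly like this:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config). The real helper instead
	// invokes kubectl with an explicit --context for the test profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same filter as the post-mortem step: every pod, in every namespace,
	// whose phase is not Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Against the state captured above, this would print only default/busybox in phase Pending, matching the "non-running pods: busybox" line.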

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 node stop m02 -v=7 --alsologtostderr
E0923 13:01:10.156563  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:01:51.118892  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-097312 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.490181055s)
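The exit status 30 and the 2m0.49s runtime are consistent with what the stderr trace below shows: after issuing .Stop through the kvm2 driver plugin, minikube polls .GetState roughly once per second and logs "Waiting for machine to stop N/120"; the VM never reports a stopped state, so the attempts are exhausted. A minimal self-contained sketch of that polling pattern (illustrative only; the stopper interface, machineState type, and waitForStop function are invented here and are not minikube's actual types):

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// machineState stands in for the driver's state type; the real code talks to
// the kvm2 driver over the libmachine plugin RPC shown in the trace below.
type machineState int

const (
	running machineState = iota
	stopped
)

// stopper is a hypothetical driver handle; GetState mirrors the repeated
// "(ha-097312-m02) Calling .GetState" calls in the log.
type stopper interface {
	Stop() error
	GetState() (machineState, error)
}

// waitForStop issues Stop once, then polls once per second and gives up
// after 120 attempts -- the "Waiting for machine to stop N/120" pattern.
func waitForStop(d stopper) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < 120; i++ {
		st, err := d.GetState()
		if err == nil && st == stopped {
			return nil
		}
		log.Printf("Waiting for machine to stop %d/120", i)
		time.Sleep(time.Second)
	}
	return errors.New("machine did not stop within 120 attempts")
}

// neverStops simulates this failure: the VM never reaches the stopped state,
// so waitForStop exhausts all 120 attempts (about two minutes) and errors out.
type neverStops struct{}

func (neverStops) Stop() error                     { return nil }
func (neverStops) GetState() (machineState, error) { return running, nil }

func main() {
	fmt.Println(waitForStop(neverStops{}))
}

Running the sketch takes about two minutes before printing the error, mirroring the observed duration, because the simulated driver never leaves the running state.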

                                                
                                                
-- stdout --
	* Stopping node "ha-097312-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:01:07.055428  686455 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:01:07.055681  686455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:01:07.055694  686455 out.go:358] Setting ErrFile to fd 2...
	I0923 13:01:07.055699  686455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:01:07.055957  686455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:01:07.056273  686455 mustload.go:65] Loading cluster: ha-097312
	I0923 13:01:07.056688  686455 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:01:07.056710  686455 stop.go:39] StopHost: ha-097312-m02
	I0923 13:01:07.057061  686455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:01:07.057102  686455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:01:07.074044  686455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0923 13:01:07.074615  686455 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:01:07.075519  686455 main.go:141] libmachine: Using API Version  1
	I0923 13:01:07.075551  686455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:01:07.075951  686455 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:01:07.078564  686455 out.go:177] * Stopping node "ha-097312-m02"  ...
	I0923 13:01:07.079914  686455 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0923 13:01:07.079950  686455 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 13:01:07.080222  686455 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0923 13:01:07.080260  686455 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 13:01:07.083947  686455 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 13:01:07.084425  686455 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 13:01:07.084456  686455 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 13:01:07.084576  686455 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 13:01:07.084791  686455 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 13:01:07.084936  686455 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 13:01:07.085051  686455 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 13:01:07.169714  686455 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0923 13:01:07.223912  686455 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0923 13:01:07.278383  686455 main.go:141] libmachine: Stopping "ha-097312-m02"...
	I0923 13:01:07.278433  686455 main.go:141] libmachine: (ha-097312-m02) Calling .GetState
	I0923 13:01:07.280187  686455 main.go:141] libmachine: (ha-097312-m02) Calling .Stop
	I0923 13:01:07.283844  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 0/120
	I0923 13:01:08.285790  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 1/120
	I0923 13:01:09.287424  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 2/120
	I0923 13:01:10.288942  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 3/120
	I0923 13:01:11.291346  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 4/120
	I0923 13:01:12.293447  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 5/120
	I0923 13:01:13.294997  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 6/120
	I0923 13:01:14.296862  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 7/120
	I0923 13:01:15.298480  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 8/120
	I0923 13:01:16.299793  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 9/120
	I0923 13:01:17.301740  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 10/120
	I0923 13:01:18.303625  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 11/120
	I0923 13:01:19.305333  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 12/120
	I0923 13:01:20.307129  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 13/120
	I0923 13:01:21.308813  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 14/120
	I0923 13:01:22.311440  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 15/120
	I0923 13:01:23.313303  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 16/120
	I0923 13:01:24.314863  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 17/120
	I0923 13:01:25.316681  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 18/120
	I0923 13:01:26.318341  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 19/120
	I0923 13:01:27.319983  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 20/120
	I0923 13:01:28.322570  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 21/120
	I0923 13:01:29.324039  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 22/120
	I0923 13:01:30.325651  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 23/120
	I0923 13:01:31.328189  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 24/120
	I0923 13:01:32.330905  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 25/120
	I0923 13:01:33.333145  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 26/120
	I0923 13:01:34.334925  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 27/120
	I0923 13:01:35.336545  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 28/120
	I0923 13:01:36.338138  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 29/120
	I0923 13:01:37.340045  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 30/120
	I0923 13:01:38.341492  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 31/120
	I0923 13:01:39.343012  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 32/120
	I0923 13:01:40.344309  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 33/120
	I0923 13:01:41.346063  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 34/120
	I0923 13:01:42.348300  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 35/120
	I0923 13:01:43.350122  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 36/120
	I0923 13:01:44.352447  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 37/120
	I0923 13:01:45.354059  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 38/120
	I0923 13:01:46.355401  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 39/120
	I0923 13:01:47.357350  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 40/120
	I0923 13:01:48.358859  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 41/120
	I0923 13:01:49.360484  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 42/120
	I0923 13:01:50.361756  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 43/120
	I0923 13:01:51.363557  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 44/120
	I0923 13:01:52.365663  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 45/120
	I0923 13:01:53.367244  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 46/120
	I0923 13:01:54.368670  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 47/120
	I0923 13:01:55.370332  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 48/120
	I0923 13:01:56.371721  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 49/120
	I0923 13:01:57.374018  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 50/120
	I0923 13:01:58.376348  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 51/120
	I0923 13:01:59.377820  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 52/120
	I0923 13:02:00.379204  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 53/120
	I0923 13:02:01.380842  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 54/120
	I0923 13:02:02.383182  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 55/120
	I0923 13:02:03.384710  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 56/120
	I0923 13:02:04.386480  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 57/120
	I0923 13:02:05.388761  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 58/120
	I0923 13:02:06.390187  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 59/120
	I0923 13:02:07.392666  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 60/120
	I0923 13:02:08.393926  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 61/120
	I0923 13:02:09.395231  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 62/120
	I0923 13:02:10.396539  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 63/120
	I0923 13:02:11.397970  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 64/120
	I0923 13:02:12.400378  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 65/120
	I0923 13:02:13.401802  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 66/120
	I0923 13:02:14.403372  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 67/120
	I0923 13:02:15.405046  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 68/120
	I0923 13:02:16.406800  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 69/120
	I0923 13:02:17.408711  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 70/120
	I0923 13:02:18.411116  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 71/120
	I0923 13:02:19.412481  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 72/120
	I0923 13:02:20.413975  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 73/120
	I0923 13:02:21.416566  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 74/120
	I0923 13:02:22.419094  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 75/120
	I0923 13:02:23.420660  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 76/120
	I0923 13:02:24.422215  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 77/120
	I0923 13:02:25.424179  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 78/120
	I0923 13:02:26.425590  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 79/120
	I0923 13:02:27.427548  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 80/120
	I0923 13:02:28.429380  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 81/120
	I0923 13:02:29.431830  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 82/120
	I0923 13:02:30.433717  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 83/120
	I0923 13:02:31.435386  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 84/120
	I0923 13:02:32.437539  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 85/120
	I0923 13:02:33.439085  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 86/120
	I0923 13:02:34.440945  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 87/120
	I0923 13:02:35.442661  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 88/120
	I0923 13:02:36.443946  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 89/120
	I0923 13:02:37.446320  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 90/120
	I0923 13:02:38.447701  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 91/120
	I0923 13:02:39.449342  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 92/120
	I0923 13:02:40.450880  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 93/120
	I0923 13:02:41.452248  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 94/120
	I0923 13:02:42.454398  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 95/120
	I0923 13:02:43.455789  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 96/120
	I0923 13:02:44.457303  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 97/120
	I0923 13:02:45.458962  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 98/120
	I0923 13:02:46.460554  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 99/120
	I0923 13:02:47.461918  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 100/120
	I0923 13:02:48.463664  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 101/120
	I0923 13:02:49.465181  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 102/120
	I0923 13:02:50.466840  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 103/120
	I0923 13:02:51.468786  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 104/120
	I0923 13:02:52.470960  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 105/120
	I0923 13:02:53.472802  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 106/120
	I0923 13:02:54.474250  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 107/120
	I0923 13:02:55.476042  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 108/120
	I0923 13:02:56.477585  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 109/120
	I0923 13:02:57.479134  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 110/120
	I0923 13:02:58.480864  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 111/120
	I0923 13:02:59.482544  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 112/120
	I0923 13:03:00.484448  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 113/120
	I0923 13:03:01.486120  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 114/120
	I0923 13:03:02.488436  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 115/120
	I0923 13:03:03.489936  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 116/120
	I0923 13:03:04.491273  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 117/120
	I0923 13:03:05.492859  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 118/120
	I0923 13:03:06.495088  686455 main.go:141] libmachine: (ha-097312-m02) Waiting for machine to stop 119/120
	I0923 13:03:07.495775  686455 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0923 13:03:07.495964  686455 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-097312 node stop m02 -v=7 --alsologtostderr": exit status 30
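The stderr above shows libmachine asking the VM to stop and then polling its state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") before giving up with the temporary error that surfaces as exit status 30. As a rough, hypothetical sketch only (this is not minikube's or libmachine's actual code; the driver interface, waitForStop, and stuckDriver below are made-up names), a poll-until-stopped loop of that shape can be written in Go like this:

// stopwait.go: a minimal, self-contained sketch of a "request stop, then poll"
// loop. Everything here is illustrative; none of these names come from
// minikube or libmachine.
package main

import (
	"errors"
	"fmt"
	"time"
)

// State is a simplified stand-in for a hypervisor power state.
type State string

const (
	Running State = "Running"
	Stopped State = "Stopped"
)

// driver abstracts the two calls the loop needs from a machine driver.
type driver interface {
	Stop() error              // request a graceful shutdown
	GetState() (State, error) // report the current power state
}

// waitForStop requests a stop and then polls until the machine reports
// Stopped or the attempt budget is exhausted, mirroring the 0/120..119/120
// countdown in the log above.
func waitForStop(d driver, attempts int, interval time.Duration) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		state, err := d.GetState()
		if err != nil {
			return err
		}
		if state == Stopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckDriver never leaves Running, reproducing the failure mode reported here.
type stuckDriver struct{}

func (stuckDriver) Stop() error              { return nil }
func (stuckDriver) GetState() (State, error) { return Running, nil }

func main() {
	// A short budget and interval keep the example quick to run.
	if err := waitForStop(stuckDriver{}, 5, 100*time.Millisecond); err != nil {
		fmt.Println("Failed to stop node:", err)
	}
}

Running the sketch with a driver that never leaves the Running state reproduces the same countdown pattern seen in the log and ends with the same kind of temporary error.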
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr
E0923 13:03:13.042179  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr: (18.714105218s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-097312 -n ha-097312
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-097312 logs -n 25: (1.423109875s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3809348295/001/cp-test_ha-097312-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312:/home/docker/cp-test_ha-097312-m03_ha-097312.txt                       |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312 sudo cat                                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312.txt                                 |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m02:/home/docker/cp-test_ha-097312-m03_ha-097312-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m02 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04:/home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m04 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp testdata/cp-test.txt                                                | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3809348295/001/cp-test_ha-097312-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312:/home/docker/cp-test_ha-097312-m04_ha-097312.txt                       |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312 sudo cat                                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312.txt                                 |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m02:/home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m02 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03:/home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m03 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-097312 node stop m02 -v=7                                                     | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:56:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:56:21.828511  682373 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:56:21.828805  682373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:56:21.828814  682373 out.go:358] Setting ErrFile to fd 2...
	I0923 12:56:21.828819  682373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:56:21.829029  682373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 12:56:21.829675  682373 out.go:352] Setting JSON to false
	I0923 12:56:21.830688  682373 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9525,"bootTime":1727086657,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:56:21.830795  682373 start.go:139] virtualization: kvm guest
	I0923 12:56:21.833290  682373 out.go:177] * [ha-097312] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:56:21.834872  682373 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:56:21.834925  682373 notify.go:220] Checking for updates...
	I0923 12:56:21.837758  682373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:56:21.839025  682373 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:56:21.840177  682373 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:56:21.841224  682373 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:56:21.842534  682373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:56:21.843976  682373 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:56:21.880376  682373 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 12:56:21.881602  682373 start.go:297] selected driver: kvm2
	I0923 12:56:21.881616  682373 start.go:901] validating driver "kvm2" against <nil>
	I0923 12:56:21.881629  682373 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:56:21.882531  682373 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:56:21.882644  682373 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 12:56:21.899127  682373 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 12:56:21.899181  682373 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:56:21.899449  682373 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:56:21.899480  682373 cni.go:84] Creating CNI manager for ""
	I0923 12:56:21.899527  682373 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 12:56:21.899535  682373 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 12:56:21.899626  682373 start.go:340] cluster config:
	{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:56:21.899742  682373 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:56:21.901896  682373 out.go:177] * Starting "ha-097312" primary control-plane node in "ha-097312" cluster
	I0923 12:56:21.903202  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:56:21.903247  682373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 12:56:21.903256  682373 cache.go:56] Caching tarball of preloaded images
	I0923 12:56:21.903357  682373 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:56:21.903371  682373 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:56:21.903879  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:56:21.903923  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json: {Name:mkf732f530eb47d72142f084d9eb3cd0edcde9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:21.904117  682373 start.go:360] acquireMachinesLock for ha-097312: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:56:21.904165  682373 start.go:364] duration metric: took 29.656µs to acquireMachinesLock for "ha-097312"
	I0923 12:56:21.904184  682373 start.go:93] Provisioning new machine with config: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:56:21.904282  682373 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 12:56:21.905963  682373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:56:21.906128  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:56:21.906175  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:56:21.921537  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41699
	I0923 12:56:21.922061  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:56:21.922650  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:56:21.922667  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:56:21.923007  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:56:21.923179  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:21.923321  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:21.923466  682373 start.go:159] libmachine.API.Create for "ha-097312" (driver="kvm2")
	I0923 12:56:21.923507  682373 client.go:168] LocalClient.Create starting
	I0923 12:56:21.923545  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:56:21.923585  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:56:21.923623  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:56:21.923700  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:56:21.923738  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:56:21.923763  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:56:21.923785  682373 main.go:141] libmachine: Running pre-create checks...
	I0923 12:56:21.923796  682373 main.go:141] libmachine: (ha-097312) Calling .PreCreateCheck
	I0923 12:56:21.924185  682373 main.go:141] libmachine: (ha-097312) Calling .GetConfigRaw
	I0923 12:56:21.924615  682373 main.go:141] libmachine: Creating machine...
	I0923 12:56:21.924630  682373 main.go:141] libmachine: (ha-097312) Calling .Create
	I0923 12:56:21.924800  682373 main.go:141] libmachine: (ha-097312) Creating KVM machine...
	I0923 12:56:21.926163  682373 main.go:141] libmachine: (ha-097312) DBG | found existing default KVM network
	I0923 12:56:21.926884  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:21.926751  682396 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0923 12:56:21.926933  682373 main.go:141] libmachine: (ha-097312) DBG | created network xml: 
	I0923 12:56:21.926948  682373 main.go:141] libmachine: (ha-097312) DBG | <network>
	I0923 12:56:21.926958  682373 main.go:141] libmachine: (ha-097312) DBG |   <name>mk-ha-097312</name>
	I0923 12:56:21.926973  682373 main.go:141] libmachine: (ha-097312) DBG |   <dns enable='no'/>
	I0923 12:56:21.926984  682373 main.go:141] libmachine: (ha-097312) DBG |   
	I0923 12:56:21.926995  682373 main.go:141] libmachine: (ha-097312) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 12:56:21.927005  682373 main.go:141] libmachine: (ha-097312) DBG |     <dhcp>
	I0923 12:56:21.927010  682373 main.go:141] libmachine: (ha-097312) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 12:56:21.927018  682373 main.go:141] libmachine: (ha-097312) DBG |     </dhcp>
	I0923 12:56:21.927023  682373 main.go:141] libmachine: (ha-097312) DBG |   </ip>
	I0923 12:56:21.927028  682373 main.go:141] libmachine: (ha-097312) DBG |   
	I0923 12:56:21.927037  682373 main.go:141] libmachine: (ha-097312) DBG | </network>
	I0923 12:56:21.927049  682373 main.go:141] libmachine: (ha-097312) DBG | 
	I0923 12:56:21.932476  682373 main.go:141] libmachine: (ha-097312) DBG | trying to create private KVM network mk-ha-097312 192.168.39.0/24...
	I0923 12:56:22.007044  682373 main.go:141] libmachine: (ha-097312) DBG | private KVM network mk-ha-097312 192.168.39.0/24 created
	I0923 12:56:22.007081  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.007015  682396 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:56:22.007094  682373 main.go:141] libmachine: (ha-097312) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312 ...
	I0923 12:56:22.007109  682373 main.go:141] libmachine: (ha-097312) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:56:22.007154  682373 main.go:141] libmachine: (ha-097312) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:56:22.288956  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.288821  682396 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa...
	I0923 12:56:22.447093  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.446935  682396 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/ha-097312.rawdisk...
	I0923 12:56:22.447150  682373 main.go:141] libmachine: (ha-097312) DBG | Writing magic tar header
	I0923 12:56:22.447245  682373 main.go:141] libmachine: (ha-097312) DBG | Writing SSH key tar header
	I0923 12:56:22.447298  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.447079  682396 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312 ...
	I0923 12:56:22.447319  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312 (perms=drwx------)
	I0923 12:56:22.447334  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:56:22.447344  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:56:22.447360  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:56:22.447372  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:56:22.447381  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312
	I0923 12:56:22.447394  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:56:22.447407  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:56:22.447421  682373 main.go:141] libmachine: (ha-097312) Creating domain...
	I0923 12:56:22.447439  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:56:22.447455  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:56:22.447468  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:56:22.447479  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:56:22.447492  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home
	I0923 12:56:22.447500  682373 main.go:141] libmachine: (ha-097312) DBG | Skipping /home - not owner
	I0923 12:56:22.448456  682373 main.go:141] libmachine: (ha-097312) define libvirt domain using xml: 
	I0923 12:56:22.448482  682373 main.go:141] libmachine: (ha-097312) <domain type='kvm'>
	I0923 12:56:22.448488  682373 main.go:141] libmachine: (ha-097312)   <name>ha-097312</name>
	I0923 12:56:22.448493  682373 main.go:141] libmachine: (ha-097312)   <memory unit='MiB'>2200</memory>
	I0923 12:56:22.448498  682373 main.go:141] libmachine: (ha-097312)   <vcpu>2</vcpu>
	I0923 12:56:22.448502  682373 main.go:141] libmachine: (ha-097312)   <features>
	I0923 12:56:22.448506  682373 main.go:141] libmachine: (ha-097312)     <acpi/>
	I0923 12:56:22.448510  682373 main.go:141] libmachine: (ha-097312)     <apic/>
	I0923 12:56:22.448514  682373 main.go:141] libmachine: (ha-097312)     <pae/>
	I0923 12:56:22.448526  682373 main.go:141] libmachine: (ha-097312)     
	I0923 12:56:22.448561  682373 main.go:141] libmachine: (ha-097312)   </features>
	I0923 12:56:22.448583  682373 main.go:141] libmachine: (ha-097312)   <cpu mode='host-passthrough'>
	I0923 12:56:22.448588  682373 main.go:141] libmachine: (ha-097312)   
	I0923 12:56:22.448594  682373 main.go:141] libmachine: (ha-097312)   </cpu>
	I0923 12:56:22.448600  682373 main.go:141] libmachine: (ha-097312)   <os>
	I0923 12:56:22.448607  682373 main.go:141] libmachine: (ha-097312)     <type>hvm</type>
	I0923 12:56:22.448634  682373 main.go:141] libmachine: (ha-097312)     <boot dev='cdrom'/>
	I0923 12:56:22.448653  682373 main.go:141] libmachine: (ha-097312)     <boot dev='hd'/>
	I0923 12:56:22.448665  682373 main.go:141] libmachine: (ha-097312)     <bootmenu enable='no'/>
	I0923 12:56:22.448674  682373 main.go:141] libmachine: (ha-097312)   </os>
	I0923 12:56:22.448693  682373 main.go:141] libmachine: (ha-097312)   <devices>
	I0923 12:56:22.448701  682373 main.go:141] libmachine: (ha-097312)     <disk type='file' device='cdrom'>
	I0923 12:56:22.448711  682373 main.go:141] libmachine: (ha-097312)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/boot2docker.iso'/>
	I0923 12:56:22.448722  682373 main.go:141] libmachine: (ha-097312)       <target dev='hdc' bus='scsi'/>
	I0923 12:56:22.448735  682373 main.go:141] libmachine: (ha-097312)       <readonly/>
	I0923 12:56:22.448746  682373 main.go:141] libmachine: (ha-097312)     </disk>
	I0923 12:56:22.448754  682373 main.go:141] libmachine: (ha-097312)     <disk type='file' device='disk'>
	I0923 12:56:22.448761  682373 main.go:141] libmachine: (ha-097312)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:56:22.448771  682373 main.go:141] libmachine: (ha-097312)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/ha-097312.rawdisk'/>
	I0923 12:56:22.448779  682373 main.go:141] libmachine: (ha-097312)       <target dev='hda' bus='virtio'/>
	I0923 12:56:22.448783  682373 main.go:141] libmachine: (ha-097312)     </disk>
	I0923 12:56:22.448790  682373 main.go:141] libmachine: (ha-097312)     <interface type='network'>
	I0923 12:56:22.448799  682373 main.go:141] libmachine: (ha-097312)       <source network='mk-ha-097312'/>
	I0923 12:56:22.448805  682373 main.go:141] libmachine: (ha-097312)       <model type='virtio'/>
	I0923 12:56:22.448810  682373 main.go:141] libmachine: (ha-097312)     </interface>
	I0923 12:56:22.448820  682373 main.go:141] libmachine: (ha-097312)     <interface type='network'>
	I0923 12:56:22.448833  682373 main.go:141] libmachine: (ha-097312)       <source network='default'/>
	I0923 12:56:22.448840  682373 main.go:141] libmachine: (ha-097312)       <model type='virtio'/>
	I0923 12:56:22.448845  682373 main.go:141] libmachine: (ha-097312)     </interface>
	I0923 12:56:22.448855  682373 main.go:141] libmachine: (ha-097312)     <serial type='pty'>
	I0923 12:56:22.448860  682373 main.go:141] libmachine: (ha-097312)       <target port='0'/>
	I0923 12:56:22.448869  682373 main.go:141] libmachine: (ha-097312)     </serial>
	I0923 12:56:22.448875  682373 main.go:141] libmachine: (ha-097312)     <console type='pty'>
	I0923 12:56:22.448885  682373 main.go:141] libmachine: (ha-097312)       <target type='serial' port='0'/>
	I0923 12:56:22.448897  682373 main.go:141] libmachine: (ha-097312)     </console>
	I0923 12:56:22.448912  682373 main.go:141] libmachine: (ha-097312)     <rng model='virtio'>
	I0923 12:56:22.448925  682373 main.go:141] libmachine: (ha-097312)       <backend model='random'>/dev/random</backend>
	I0923 12:56:22.448933  682373 main.go:141] libmachine: (ha-097312)     </rng>
	I0923 12:56:22.448940  682373 main.go:141] libmachine: (ha-097312)     
	I0923 12:56:22.448949  682373 main.go:141] libmachine: (ha-097312)     
	I0923 12:56:22.448957  682373 main.go:141] libmachine: (ha-097312)   </devices>
	I0923 12:56:22.448965  682373 main.go:141] libmachine: (ha-097312) </domain>
	I0923 12:56:22.448975  682373 main.go:141] libmachine: (ha-097312) 
	I0923 12:56:22.453510  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:86:5c:23 in network default
	I0923 12:56:22.454136  682373 main.go:141] libmachine: (ha-097312) Ensuring networks are active...
	I0923 12:56:22.454160  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:22.455025  682373 main.go:141] libmachine: (ha-097312) Ensuring network default is active
	I0923 12:56:22.455403  682373 main.go:141] libmachine: (ha-097312) Ensuring network mk-ha-097312 is active
	I0923 12:56:22.455910  682373 main.go:141] libmachine: (ha-097312) Getting domain xml...
	I0923 12:56:22.456804  682373 main.go:141] libmachine: (ha-097312) Creating domain...
	I0923 12:56:23.684285  682373 main.go:141] libmachine: (ha-097312) Waiting to get IP...
	I0923 12:56:23.685050  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:23.685483  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:23.685549  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:23.685457  682396 retry.go:31] will retry after 284.819092ms: waiting for machine to come up
	I0923 12:56:23.972224  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:23.972712  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:23.972742  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:23.972658  682396 retry.go:31] will retry after 296.568661ms: waiting for machine to come up
	I0923 12:56:24.271431  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:24.271859  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:24.271878  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:24.271837  682396 retry.go:31] will retry after 305.883088ms: waiting for machine to come up
	I0923 12:56:24.579449  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:24.579888  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:24.579915  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:24.579844  682396 retry.go:31] will retry after 417.526062ms: waiting for machine to come up
	I0923 12:56:24.999494  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:24.999869  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:24.999897  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:24.999819  682396 retry.go:31] will retry after 647.110055ms: waiting for machine to come up
	I0923 12:56:25.648547  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:25.649112  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:25.649144  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:25.649045  682396 retry.go:31] will retry after 699.974926ms: waiting for machine to come up
	I0923 12:56:26.350970  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:26.351427  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:26.351457  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:26.351401  682396 retry.go:31] will retry after 822.151225ms: waiting for machine to come up
	I0923 12:56:27.175278  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:27.175659  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:27.175688  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:27.175617  682396 retry.go:31] will retry after 1.471324905s: waiting for machine to come up
	I0923 12:56:28.649431  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:28.649912  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:28.649939  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:28.649865  682396 retry.go:31] will retry after 1.835415418s: waiting for machine to come up
	I0923 12:56:30.487327  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:30.487788  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:30.487842  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:30.487762  682396 retry.go:31] will retry after 1.452554512s: waiting for machine to come up
	I0923 12:56:31.941929  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:31.942466  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:31.942496  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:31.942406  682396 retry.go:31] will retry after 2.833337463s: waiting for machine to come up
	I0923 12:56:34.777034  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:34.777417  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:34.777435  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:34.777385  682396 retry.go:31] will retry after 2.506824406s: waiting for machine to come up
	I0923 12:56:37.285508  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:37.285975  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:37.286004  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:37.285923  682396 retry.go:31] will retry after 2.872661862s: waiting for machine to come up
	I0923 12:56:40.162076  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:40.162525  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:40.162542  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:40.162478  682396 retry.go:31] will retry after 3.815832653s: waiting for machine to come up
	I0923 12:56:43.980644  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:43.981295  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has current primary IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:43.981341  682373 main.go:141] libmachine: (ha-097312) Found IP for machine: 192.168.39.160
	I0923 12:56:43.981355  682373 main.go:141] libmachine: (ha-097312) Reserving static IP address...
	I0923 12:56:43.981713  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find host DHCP lease matching {name: "ha-097312", mac: "52:54:00:06:7f:c5", ip: "192.168.39.160"} in network mk-ha-097312
	I0923 12:56:44.063688  682373 main.go:141] libmachine: (ha-097312) DBG | Getting to WaitForSSH function...
	I0923 12:56:44.063720  682373 main.go:141] libmachine: (ha-097312) Reserved static IP address: 192.168.39.160
	I0923 12:56:44.063760  682373 main.go:141] libmachine: (ha-097312) Waiting for SSH to be available...
	I0923 12:56:44.066589  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.067094  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.067121  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.067273  682373 main.go:141] libmachine: (ha-097312) DBG | Using SSH client type: external
	I0923 12:56:44.067298  682373 main.go:141] libmachine: (ha-097312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa (-rw-------)
	I0923 12:56:44.067335  682373 main.go:141] libmachine: (ha-097312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:56:44.067346  682373 main.go:141] libmachine: (ha-097312) DBG | About to run SSH command:
	I0923 12:56:44.067388  682373 main.go:141] libmachine: (ha-097312) DBG | exit 0
	I0923 12:56:44.194221  682373 main.go:141] libmachine: (ha-097312) DBG | SSH cmd err, output: <nil>: 
	I0923 12:56:44.194546  682373 main.go:141] libmachine: (ha-097312) KVM machine creation complete!
	I0923 12:56:44.194794  682373 main.go:141] libmachine: (ha-097312) Calling .GetConfigRaw
	I0923 12:56:44.195383  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:44.195600  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:44.195740  682373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:56:44.195754  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:56:44.197002  682373 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:56:44.197015  682373 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:56:44.197021  682373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:56:44.197025  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.200085  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.200458  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.200480  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.200781  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.201011  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.201209  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.201346  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.201528  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.201732  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.201744  682373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:56:44.309556  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:56:44.309581  682373 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:56:44.309589  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.312757  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.313154  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.313202  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.313393  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.313633  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.313899  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.314086  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.314302  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.314501  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.314513  682373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:56:44.422704  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:56:44.422779  682373 main.go:141] libmachine: found compatible host: buildroot
	I0923 12:56:44.422786  682373 main.go:141] libmachine: Provisioning with buildroot...
	I0923 12:56:44.422796  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:44.423069  682373 buildroot.go:166] provisioning hostname "ha-097312"
	I0923 12:56:44.423101  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:44.423298  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.426419  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.426747  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.426769  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.426988  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.427186  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.427341  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.427471  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.427647  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.427840  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.427852  682373 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-097312 && echo "ha-097312" | sudo tee /etc/hostname
	I0923 12:56:44.548083  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312
	
	I0923 12:56:44.548119  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.550930  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.551237  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.551281  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.551446  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.551667  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.551843  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.551987  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.552153  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.552393  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.552421  682373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-097312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-097312/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-097312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:56:44.667004  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:56:44.667043  682373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:56:44.667068  682373 buildroot.go:174] setting up certificates
	I0923 12:56:44.667085  682373 provision.go:84] configureAuth start
	I0923 12:56:44.667098  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:44.667438  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:44.670311  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.670792  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.670845  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.670910  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.673549  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.673871  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.673897  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.674038  682373 provision.go:143] copyHostCerts
	I0923 12:56:44.674077  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:56:44.674137  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 12:56:44.674159  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:56:44.674245  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:56:44.674380  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:56:44.674409  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 12:56:44.674417  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:56:44.674460  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:56:44.674580  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:56:44.674634  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 12:56:44.674642  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:56:44.674698  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
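copyHostCerts above follows a remove-then-copy pattern for each certificate ("found ..., removing ...", then "cp: ... --> ..."). A small Go sketch of that pattern with placeholder paths; the file modes and error handling are illustrative assumptions, not exec_runner.go's actual behavior:

// copycert.go - sketch of the remove-then-copy step used for host certs.
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		// found existing destination, removing ...
		if err := os.Remove(dst); err != nil {
			return fmt.Errorf("rm %s: %w", dst, err)
		}
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	// cp: src --> dst
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := copyHostCert(".minikube/certs/cert.pem", ".minikube/cert.pem"); err != nil {
		fmt.Println("copy failed:", err)
	}
}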
	I0923 12:56:44.674832  682373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.ha-097312 san=[127.0.0.1 192.168.39.160 ha-097312 localhost minikube]
	I0923 12:56:44.904863  682373 provision.go:177] copyRemoteCerts
	I0923 12:56:44.904957  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:56:44.904984  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.908150  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.908582  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.908619  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.908884  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.909135  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.909342  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.909527  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:44.992087  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:56:44.992199  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 12:56:45.016139  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:56:45.016229  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:56:45.039856  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:56:45.040045  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:56:45.063092  682373 provision.go:87] duration metric: took 395.980147ms to configureAuth
	I0923 12:56:45.063127  682373 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:56:45.063302  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:56:45.063398  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.066695  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.067038  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.067071  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.067240  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.067488  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.067676  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.067817  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.068046  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:45.068308  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:45.068326  682373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:56:45.283348  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 12:56:45.283372  682373 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:56:45.283380  682373 main.go:141] libmachine: (ha-097312) Calling .GetURL
	I0923 12:56:45.284754  682373 main.go:141] libmachine: (ha-097312) DBG | Using libvirt version 6000000
	I0923 12:56:45.287147  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.287577  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.287606  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.287745  682373 main.go:141] libmachine: Docker is up and running!
	I0923 12:56:45.287766  682373 main.go:141] libmachine: Reticulating splines...
	I0923 12:56:45.287773  682373 client.go:171] duration metric: took 23.364255409s to LocalClient.Create
	I0923 12:56:45.287797  682373 start.go:167] duration metric: took 23.364332593s to libmachine.API.Create "ha-097312"
	I0923 12:56:45.287811  682373 start.go:293] postStartSetup for "ha-097312" (driver="kvm2")
	I0923 12:56:45.287824  682373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:56:45.287841  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.288125  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:56:45.288161  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.290362  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.290827  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.290857  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.291024  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.291233  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.291406  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.291630  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:45.376057  682373 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:56:45.380314  682373 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:56:45.380346  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:56:45.380412  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:56:45.380483  682373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 12:56:45.380492  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 12:56:45.380593  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:56:45.390109  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:56:45.414414  682373 start.go:296] duration metric: took 126.585208ms for postStartSetup
	I0923 12:56:45.414519  682373 main.go:141] libmachine: (ha-097312) Calling .GetConfigRaw
	I0923 12:56:45.415223  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:45.418035  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.418499  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.418535  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.418757  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:56:45.418971  682373 start.go:128] duration metric: took 23.514676713s to createHost
	I0923 12:56:45.419008  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.421290  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.421582  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.421607  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.421739  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.421993  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.422231  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.422397  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.422624  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:45.422888  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:45.422913  682373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:56:45.530668  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727096205.504964904
	
	I0923 12:56:45.530696  682373 fix.go:216] guest clock: 1727096205.504964904
	I0923 12:56:45.530705  682373 fix.go:229] Guest: 2024-09-23 12:56:45.504964904 +0000 UTC Remote: 2024-09-23 12:56:45.41898604 +0000 UTC m=+23.627481107 (delta=85.978864ms)
	I0923 12:56:45.530768  682373 fix.go:200] guest clock delta is within tolerance: 85.978864ms
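The fix.go lines above compare the guest's `date +%s.%N` output against the local wall clock and accept the host when the delta is inside a tolerance. A minimal Go sketch of that check; the 1-second tolerance constant is an assumption, not the threshold minikube actually uses:

// clockdelta.go - sketch of the guest-clock tolerance check.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// assumed tolerance; the real threshold may differ.
const tolerance = 1 * time.Second

func guestClockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(local), nil
}

func main() {
	delta, err := guestClockDelta("1727096205.504964904", time.Now())
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}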
	I0923 12:56:45.530777  682373 start.go:83] releasing machines lock for "ha-097312", held for 23.626602839s
	I0923 12:56:45.530803  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.531129  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:45.533942  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.534282  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.534313  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.534510  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.535018  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.535175  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.535268  682373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:56:45.535329  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.535407  682373 ssh_runner.go:195] Run: cat /version.json
	I0923 12:56:45.535432  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.538344  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.538693  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.538718  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.538736  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.538916  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.539107  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.539142  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.539168  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.539301  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.539401  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.539491  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:45.539522  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.539669  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.539871  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:45.615078  682373 ssh_runner.go:195] Run: systemctl --version
	I0923 12:56:45.652339  682373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:56:45.814596  682373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:56:45.820480  682373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:56:45.820567  682373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:56:45.837076  682373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:56:45.837109  682373 start.go:495] detecting cgroup driver to use...
	I0923 12:56:45.837204  682373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:56:45.852886  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:56:45.867319  682373 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:56:45.867387  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:56:45.881106  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:56:45.895047  682373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:56:46.010122  682373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:56:46.160036  682373 docker.go:233] disabling docker service ...
	I0923 12:56:46.160166  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:56:46.174281  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:56:46.187289  682373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:56:46.315823  682373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:56:46.451742  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:56:46.465159  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:56:46.485490  682373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:56:46.485567  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.496172  682373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:56:46.496276  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.506865  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.517182  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.527559  682373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:56:46.538362  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.548742  682373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.565850  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.576416  682373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:56:46.586314  682373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:56:46.586391  682373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:56:46.600960  682373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:56:46.613686  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:56:46.747213  682373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 12:56:46.833362  682373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:56:46.833455  682373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 12:56:46.838407  682373 start.go:563] Will wait 60s for crictl version
	I0923 12:56:46.838481  682373 ssh_runner.go:195] Run: which crictl
	I0923 12:56:46.842254  682373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:56:46.881238  682373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
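Before the crictl version query above succeeds, the start waits up to 60s for /var/run/crio/crio.sock to appear after `systemctl restart crio`. A short Go sketch of such a socket wait; the 500ms poll interval and the stat-based check are assumptions:

// waitsock.go - sketch of "Will wait 60s for socket path /var/run/crio/crio.sock".
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // socket exists, runtime is (probably) up
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}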
	I0923 12:56:46.881313  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:56:46.910755  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:56:46.941180  682373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:56:46.942573  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:46.945291  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:46.945654  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:46.945683  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:46.945901  682373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:56:46.950351  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:56:46.963572  682373 kubeadm.go:883] updating cluster {Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:56:46.963689  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:56:46.963752  682373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:56:46.995863  682373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 12:56:46.995949  682373 ssh_runner.go:195] Run: which lz4
	I0923 12:56:47.000077  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 12:56:47.000199  682373 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 12:56:47.004245  682373 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 12:56:47.004290  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 12:56:48.233778  682373 crio.go:462] duration metric: took 1.233615545s to copy over tarball
	I0923 12:56:48.233872  682373 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 12:56:50.293806  682373 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059892855s)
	I0923 12:56:50.293864  682373 crio.go:469] duration metric: took 2.060053222s to extract the tarball
	I0923 12:56:50.293875  682373 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 12:56:50.330288  682373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:56:50.382422  682373 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 12:56:50.382453  682373 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:56:50.382463  682373 kubeadm.go:934] updating node { 192.168.39.160 8443 v1.31.1 crio true true} ...
	I0923 12:56:50.382618  682373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-097312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:56:50.382706  682373 ssh_runner.go:195] Run: crio config
	I0923 12:56:50.429046  682373 cni.go:84] Creating CNI manager for ""
	I0923 12:56:50.429071  682373 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 12:56:50.429081  682373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:56:50.429114  682373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-097312 NodeName:ha-097312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:56:50.429251  682373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-097312"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
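The kubeadm.yaml above is rendered from the kubeadm options logged at kubeadm.go:181 and later copied to the node as /var/tmp/minikube/kubeadm.yaml.new. An illustrative Go sketch of rendering a ClusterConfiguration fragment like it from a parameter struct via text/template; the struct fields and template here are simplified stand-ins, not minikube's bootstrapper templates:

// kubeadmcfg.go - sketch of templating a kubeadm ClusterConfiguration fragment.
package main

import (
	"os"
	"text/template"
)

// Params holds the handful of values that vary per cluster in this sketch.
type Params struct {
	KubernetesVersion    string
	ControlPlaneEndpoint string
	PodSubnet            string
	ServiceSubnet        string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := Params{
		KubernetesVersion:    "v1.31.1",
		ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	}
	// Render to stdout; the real flow writes the result to the node over SSH
	// and copies it to /var/tmp/minikube/kubeadm.yaml before `kubeadm init`.
	tmpl := template.Must(template.New("cfg").Parse(clusterCfg))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}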
	I0923 12:56:50.429291  682373 kube-vip.go:115] generating kube-vip config ...
	I0923 12:56:50.429336  682373 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:56:50.447284  682373 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:56:50.447397  682373 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
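Note the lb_enable/lb_port settings in the manifest above: they are only emitted because the earlier `modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack` probe succeeded ("auto-enabling control-plane load-balancing in kube-vip"). A minimal Go sketch of that gating check; the decision logic is an assumption, while the probed command mirrors the log:

// kubevip_lb.go - sketch of gating kube-vip load-balancing on IPVS availability.
package main

import (
	"fmt"
	"os/exec"
)

func ipvsAvailable() bool {
	// Mirrors the logged probe:
	//   sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	return err == nil
}

func main() {
	lbEnable := ipvsAvailable()
	fmt.Printf("auto-enabling control-plane load-balancing: %v\n", lbEnable)
}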
	I0923 12:56:50.447453  682373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:56:50.457555  682373 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:56:50.457631  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 12:56:50.467361  682373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 12:56:50.484221  682373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:56:50.501136  682373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 12:56:50.517771  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0923 12:56:50.535030  682373 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:56:50.538926  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:56:50.550841  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:56:50.685055  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:56:50.702466  682373 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312 for IP: 192.168.39.160
	I0923 12:56:50.702500  682373 certs.go:194] generating shared ca certs ...
	I0923 12:56:50.702525  682373 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.702732  682373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:56:50.702796  682373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:56:50.702811  682373 certs.go:256] generating profile certs ...
	I0923 12:56:50.702903  682373 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key
	I0923 12:56:50.702928  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt with IP's: []
	I0923 12:56:50.839973  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt ...
	I0923 12:56:50.840005  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt: {Name:mk3ec295cf75d5f37a812267f291d008d2d41849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.840201  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key ...
	I0923 12:56:50.840215  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key: {Name:mk2a9a6301a953bccf7179cf3fcd9c6c49523a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.840321  682373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9
	I0923 12:56:50.840339  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160 192.168.39.254]
	I0923 12:56:50.957561  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9 ...
	I0923 12:56:50.957598  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9: {Name:mke07e7dcb821169b2edcdcfe37c1283edab6d93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.957795  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9 ...
	I0923 12:56:50.957814  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9: {Name:mk473437de8fd0279ccc88430a74364f16849fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.957935  682373 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9 -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	I0923 12:56:50.958016  682373 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9 -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key
	I0923 12:56:50.958070  682373 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key
	I0923 12:56:50.958086  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt with IP's: []
	I0923 12:56:51.039985  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt ...
	I0923 12:56:51.040029  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt: {Name:mk08fe599b3bb9f9eafe363d4dcfa2dc4583d108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:51.040291  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key ...
	I0923 12:56:51.040316  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key: {Name:mke55afec0b5332166375bf6241593073b8f40da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
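The client, apiserver and aggregator profile certs above are generated on the host and then pushed to /var/lib/minikube/certs. An illustrative Go sketch of producing a certificate with the same IP SANs via crypto/x509; it is self-signed for brevity, whereas the real flow signs against the minikubeCA key pair, and the 26280h lifetime simply echoes the CertExpiration value in the logged config:

// profilecert.go - sketch of generating a cert with apiserver-style IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-097312"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.160"),
			net.ParseIP("192.168.39.254"),
		},
		DNSNames: []string{"ha-097312", "localhost", "minikube"},
	}
	// Self-signed here; the real profile certs are signed by minikubeCA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}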
	I0923 12:56:51.040432  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:56:51.040459  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:56:51.040472  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:56:51.040484  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:56:51.040497  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:56:51.040509  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:56:51.040524  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:56:51.040539  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:56:51.040619  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 12:56:51.040660  682373 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 12:56:51.040672  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:56:51.040698  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:56:51.040726  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:56:51.040750  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:56:51.040798  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:56:51.040830  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.040846  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.040863  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.041476  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:56:51.067263  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:56:51.091814  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:56:51.115009  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:56:51.138682  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 12:56:51.162647  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:56:51.186729  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:56:51.210155  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:56:51.233576  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 12:56:51.256633  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 12:56:51.279649  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:56:51.303438  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:56:51.320192  682373 ssh_runner.go:195] Run: openssl version
	I0923 12:56:51.326310  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 12:56:51.337813  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.342410  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.342469  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.348141  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:56:51.358951  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:56:51.369927  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.374498  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.374569  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.380225  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:56:51.390788  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 12:56:51.401357  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.405984  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.406065  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.411938  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
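Each CA above is installed in the guest's trust store by hashing it with `openssl x509 -hash -noout` and symlinking it into /etc/ssl/certs as <hash>.0 (e.g. b5213941.0 for minikubeCA.pem). A sketch of that linking step in Go, shelling out to openssl; the paths are placeholders and the example assumes it runs on the guest with the openssl binary present:

// certlink.go - sketch of the hash-and-symlink trust-store step.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	// openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. b5213941
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, then point <hash>.0 at the certificate (ln -fs).
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}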
	I0923 12:56:51.422798  682373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:56:51.426778  682373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:56:51.426837  682373 kubeadm.go:392] StartCluster: {Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:56:51.426911  682373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 12:56:51.426969  682373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 12:56:51.467074  682373 cri.go:89] found id: ""
	I0923 12:56:51.467159  682373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:56:51.482686  682373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 12:56:51.497867  682373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 12:56:51.512428  682373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:56:51.512454  682373 kubeadm.go:157] found existing configuration files:
	
	I0923 12:56:51.512511  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 12:56:51.529985  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:56:51.530093  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 12:56:51.542142  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 12:56:51.550802  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:56:51.550892  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 12:56:51.560648  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 12:56:51.570247  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:56:51.570324  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 12:56:51.580148  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 12:56:51.589038  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:56:51.589128  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 12:56:51.598472  682373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 12:56:51.709387  682373 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 12:56:51.709477  682373 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 12:56:51.804679  682373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 12:56:51.804878  682373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 12:56:51.805013  682373 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 12:56:51.813809  682373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 12:56:51.816648  682373 out.go:235]   - Generating certificates and keys ...
	I0923 12:56:51.817490  682373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 12:56:51.817573  682373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 12:56:51.891229  682373 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 12:56:51.977862  682373 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 12:56:52.256371  682373 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 12:56:52.418600  682373 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 12:56:52.566134  682373 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 12:56:52.566417  682373 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-097312 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	I0923 12:56:52.754339  682373 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 12:56:52.754631  682373 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-097312 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	I0923 12:56:52.984244  682373 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 12:56:53.199395  682373 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 12:56:53.333105  682373 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 12:56:53.333280  682373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 12:56:53.475215  682373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 12:56:53.703024  682373 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 12:56:53.843337  682373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 12:56:54.031020  682373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 12:56:54.307973  682373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 12:56:54.308522  682373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 12:56:54.312025  682373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 12:56:54.415301  682373 out.go:235]   - Booting up control plane ...
	I0923 12:56:54.415467  682373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 12:56:54.415596  682373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 12:56:54.415675  682373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 12:56:54.415768  682373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 12:56:54.415870  682373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 12:56:54.415955  682373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 12:56:54.481155  682373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 12:56:54.481329  682373 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 12:56:54.981948  682373 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.226424ms
	I0923 12:56:54.982063  682373 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 12:57:01.058259  682373 kubeadm.go:310] [api-check] The API server is healthy after 6.078664089s
	I0923 12:57:01.078738  682373 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 12:57:01.102575  682373 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 12:57:01.638520  682373 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 12:57:01.638793  682373 kubeadm.go:310] [mark-control-plane] Marking the node ha-097312 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 12:57:01.654796  682373 kubeadm.go:310] [bootstrap-token] Using token: tjz9o5.go3sw7ivocitep6z
	I0923 12:57:01.656792  682373 out.go:235]   - Configuring RBAC rules ...
	I0923 12:57:01.656993  682373 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 12:57:01.670875  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 12:57:01.681661  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 12:57:01.686098  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 12:57:01.693270  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 12:57:01.698752  682373 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 12:57:01.717473  682373 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 12:57:02.034772  682373 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 12:57:02.465304  682373 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 12:57:02.466345  682373 kubeadm.go:310] 
	I0923 12:57:02.466441  682373 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 12:57:02.466453  682373 kubeadm.go:310] 
	I0923 12:57:02.466593  682373 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 12:57:02.466605  682373 kubeadm.go:310] 
	I0923 12:57:02.466637  682373 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 12:57:02.466743  682373 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 12:57:02.466828  682373 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 12:57:02.466838  682373 kubeadm.go:310] 
	I0923 12:57:02.466914  682373 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 12:57:02.466921  682373 kubeadm.go:310] 
	I0923 12:57:02.466984  682373 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 12:57:02.466993  682373 kubeadm.go:310] 
	I0923 12:57:02.467078  682373 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 12:57:02.467176  682373 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 12:57:02.467278  682373 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 12:57:02.467287  682373 kubeadm.go:310] 
	I0923 12:57:02.467400  682373 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 12:57:02.467489  682373 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 12:57:02.467520  682373 kubeadm.go:310] 
	I0923 12:57:02.467645  682373 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tjz9o5.go3sw7ivocitep6z \
	I0923 12:57:02.467825  682373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff \
	I0923 12:57:02.467866  682373 kubeadm.go:310] 	--control-plane 
	I0923 12:57:02.467876  682373 kubeadm.go:310] 
	I0923 12:57:02.468002  682373 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 12:57:02.468014  682373 kubeadm.go:310] 
	I0923 12:57:02.468111  682373 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tjz9o5.go3sw7ivocitep6z \
	I0923 12:57:02.468232  682373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff 
	I0923 12:57:02.469853  682373 kubeadm.go:310] W0923 12:56:51.688284     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:57:02.470263  682373 kubeadm.go:310] W0923 12:56:51.689248     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:57:02.470417  682373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 12:57:02.470437  682373 cni.go:84] Creating CNI manager for ""
	I0923 12:57:02.470446  682373 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 12:57:02.472858  682373 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 12:57:02.474323  682373 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 12:57:02.479759  682373 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 12:57:02.479789  682373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 12:57:02.504445  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
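The CNI step above copies the generated manifest to the node and applies it with the node-local kubectl against /var/lib/minikube/kubeconfig. A minimal Go sketch of that apply call, assuming kubectl and the paths shown in the log (it uses os/exec directly rather than minikube's ssh_runner, so it is an illustration, not the driver's code):

package main

import (
	"fmt"
	"os/exec"
)

// applyManifest mirrors the "kubectl apply --kubeconfig=... -f ..." command in
// the log above; on the real node it is executed over SSH with sudo.
func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command(kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Paths taken from the log lines above.
	if err := applyManifest("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml"); err != nil {
		fmt.Println(err)
	}
}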
	I0923 12:57:02.891714  682373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 12:57:02.891813  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:02.891852  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-097312 minikube.k8s.io/updated_at=2024_09_23T12_57_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-097312 minikube.k8s.io/primary=true
	I0923 12:57:03.052741  682373 ops.go:34] apiserver oom_adj: -16
	I0923 12:57:03.052880  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:03.553199  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:04.053904  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:04.553368  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:05.053003  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:05.553371  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:06.053924  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:06.553890  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:06.654158  682373 kubeadm.go:1113] duration metric: took 3.762424286s to wait for elevateKubeSystemPrivileges
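The block of repeated "kubectl get sa default" runs above is a poll loop: kube-system privileges are only elevated once the default service account exists, and the log shows roughly half-second retries over about 3.8s. A generic poll-until-ready sketch of that pattern (the function and timeout names are illustrative, not minikube's actual elevateKubeSystemPrivileges code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
// deadline passes, roughly matching the half-second cadence seen in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for default service account")
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println("wait result:", err)
}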
	I0923 12:57:06.654208  682373 kubeadm.go:394] duration metric: took 15.227377014s to StartCluster
	I0923 12:57:06.654235  682373 settings.go:142] acquiring lock: {Name:mk3da09e51125fc906a9e1276ab490fc7b26b03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:06.654340  682373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:57:06.655289  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/kubeconfig: {Name:mk213d38080414fbe499f6509d2653fd99103348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:06.655604  682373 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:57:06.655633  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 12:57:06.655653  682373 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 12:57:06.655642  682373 start.go:241] waiting for startup goroutines ...
	I0923 12:57:06.655745  682373 addons.go:69] Setting storage-provisioner=true in profile "ha-097312"
	I0923 12:57:06.655797  682373 addons.go:234] Setting addon storage-provisioner=true in "ha-097312"
	I0923 12:57:06.655834  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:06.655835  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:06.655752  682373 addons.go:69] Setting default-storageclass=true in profile "ha-097312"
	I0923 12:57:06.655926  682373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-097312"
	I0923 12:57:06.656390  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.656400  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.656428  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.656430  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.672616  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43027
	I0923 12:57:06.672985  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44797
	I0923 12:57:06.673168  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.673414  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.673768  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.673789  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.673930  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.673964  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.674169  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.674315  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.674361  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:06.674868  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.674975  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.676732  682373 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:57:06.677135  682373 kapi.go:59] client config for ha-097312: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key", CAFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 12:57:06.677778  682373 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 12:57:06.678102  682373 addons.go:234] Setting addon default-storageclass=true in "ha-097312"
	I0923 12:57:06.678152  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:06.678585  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.678637  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.691933  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
	I0923 12:57:06.692442  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.693010  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.693034  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.693367  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.693647  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:06.694766  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34185
	I0923 12:57:06.695192  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.695549  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:06.695721  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.695737  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.696032  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.696640  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.696692  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.698001  682373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:57:06.699592  682373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:57:06.699614  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:57:06.699636  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:06.702740  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.703120  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:06.703136  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.703423  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:06.703599  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:06.703736  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:06.703871  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:57:06.713026  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44863
	I0923 12:57:06.713478  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.714138  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.714157  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.714441  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.714648  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:06.716436  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:06.716678  682373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:57:06.716694  682373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:57:06.716712  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:06.720029  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.720524  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:06.720549  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.720868  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:06.721094  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:06.721284  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:06.721415  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
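The two sshutil.go lines above build SSH clients from the same four ingredients each time: the machine IP, port 22, the per-machine id_rsa key, and the docker user. A minimal sketch of constructing such a client with golang.org/x/crypto/ssh (illustrative only, not minikube's sshutil implementation):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials an SSH connection using a private key file, the same
// ingredients (IP, port, key path, username) shown in the sshutil.go log lines.
func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have throwaway host keys
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
}

func main() {
	// Values taken from the log above; running this requires the key to exist.
	client, err := newSSHClient("192.168.39.160", 22,
		"/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa", "docker")
	if err != nil {
		fmt.Println("ssh dial failed:", err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}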
	I0923 12:57:06.794261  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 12:57:06.837196  682373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:57:06.948150  682373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:57:07.376765  682373 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
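For reference, the sed pipeline at 12:57:06.794 above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1). Reconstructed from that sed expression (not dumped from the cluster), the fragment inserted ahead of the forward directive in the Corefile looks like this, together with a log directive added before errors:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }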
	I0923 12:57:07.497295  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497329  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497329  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497348  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497659  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.497676  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.497686  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497695  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497795  682373 main.go:141] libmachine: (ha-097312) DBG | Closing plugin on server side
	I0923 12:57:07.497861  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.497875  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.497884  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497899  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497941  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.497955  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.498024  682373 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 12:57:07.498041  682373 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 12:57:07.498159  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.498194  682373 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0923 12:57:07.498211  682373 round_trippers.go:469] Request Headers:
	I0923 12:57:07.498225  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:57:07.498231  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:57:07.498235  682373 main.go:141] libmachine: (ha-097312) DBG | Closing plugin on server side
	I0923 12:57:07.498196  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.509952  682373 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 12:57:07.510797  682373 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 12:57:07.510817  682373 round_trippers.go:469] Request Headers:
	I0923 12:57:07.510829  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:57:07.510834  682373 round_trippers.go:473]     Content-Type: application/json
	I0923 12:57:07.510840  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:57:07.513677  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:57:07.513894  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.513920  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.514234  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.514256  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.516273  682373 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 12:57:07.517649  682373 addons.go:510] duration metric: took 861.992785ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0923 12:57:07.517685  682373 start.go:246] waiting for cluster config update ...
	I0923 12:57:07.517698  682373 start.go:255] writing updated cluster config ...
	I0923 12:57:07.519680  682373 out.go:201] 
	I0923 12:57:07.521371  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:07.521468  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:57:07.523127  682373 out.go:177] * Starting "ha-097312-m02" control-plane node in "ha-097312" cluster
	I0923 12:57:07.524508  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:57:07.524539  682373 cache.go:56] Caching tarball of preloaded images
	I0923 12:57:07.524641  682373 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:57:07.524654  682373 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:57:07.524741  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:57:07.524952  682373 start.go:360] acquireMachinesLock for ha-097312-m02: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:57:07.525025  682373 start.go:364] duration metric: took 44.618µs to acquireMachinesLock for "ha-097312-m02"
	I0923 12:57:07.525047  682373 start.go:93] Provisioning new machine with config: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:57:07.525150  682373 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0923 12:57:07.527045  682373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:57:07.527133  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:07.527160  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:07.542505  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0923 12:57:07.542956  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:07.543542  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:07.543583  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:07.543972  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:07.544208  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:07.544349  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:07.544507  682373 start.go:159] libmachine.API.Create for "ha-097312" (driver="kvm2")
	I0923 12:57:07.544535  682373 client.go:168] LocalClient.Create starting
	I0923 12:57:07.544570  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:57:07.544615  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:57:07.544634  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:57:07.544717  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:57:07.544765  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:57:07.544805  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:57:07.544827  682373 main.go:141] libmachine: Running pre-create checks...
	I0923 12:57:07.544832  682373 main.go:141] libmachine: (ha-097312-m02) Calling .PreCreateCheck
	I0923 12:57:07.545067  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetConfigRaw
	I0923 12:57:07.545510  682373 main.go:141] libmachine: Creating machine...
	I0923 12:57:07.545532  682373 main.go:141] libmachine: (ha-097312-m02) Calling .Create
	I0923 12:57:07.545663  682373 main.go:141] libmachine: (ha-097312-m02) Creating KVM machine...
	I0923 12:57:07.547155  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found existing default KVM network
	I0923 12:57:07.547384  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found existing private KVM network mk-ha-097312
	I0923 12:57:07.547524  682373 main.go:141] libmachine: (ha-097312-m02) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02 ...
	I0923 12:57:07.547546  682373 main.go:141] libmachine: (ha-097312-m02) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:57:07.547624  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.547504  682740 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:57:07.547712  682373 main.go:141] libmachine: (ha-097312-m02) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:57:07.802486  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.802340  682740 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa...
	I0923 12:57:07.948816  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.948688  682740 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/ha-097312-m02.rawdisk...
	I0923 12:57:07.948868  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Writing magic tar header
	I0923 12:57:07.948878  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Writing SSH key tar header
	I0923 12:57:07.948886  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.948826  682740 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02 ...
	I0923 12:57:07.949014  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02
	I0923 12:57:07.949056  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02 (perms=drwx------)
	I0923 12:57:07.949066  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:57:07.949084  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:57:07.949106  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:57:07.949118  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:57:07.949129  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:57:07.949139  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:57:07.949156  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:57:07.949167  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:57:07.949178  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home
	I0923 12:57:07.949191  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Skipping /home - not owner
	I0923 12:57:07.949205  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:57:07.949217  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:57:07.949229  682373 main.go:141] libmachine: (ha-097312-m02) Creating domain...
	I0923 12:57:07.950603  682373 main.go:141] libmachine: (ha-097312-m02) define libvirt domain using xml: 
	I0923 12:57:07.950628  682373 main.go:141] libmachine: (ha-097312-m02) <domain type='kvm'>
	I0923 12:57:07.950638  682373 main.go:141] libmachine: (ha-097312-m02)   <name>ha-097312-m02</name>
	I0923 12:57:07.950648  682373 main.go:141] libmachine: (ha-097312-m02)   <memory unit='MiB'>2200</memory>
	I0923 12:57:07.950655  682373 main.go:141] libmachine: (ha-097312-m02)   <vcpu>2</vcpu>
	I0923 12:57:07.950665  682373 main.go:141] libmachine: (ha-097312-m02)   <features>
	I0923 12:57:07.950672  682373 main.go:141] libmachine: (ha-097312-m02)     <acpi/>
	I0923 12:57:07.950678  682373 main.go:141] libmachine: (ha-097312-m02)     <apic/>
	I0923 12:57:07.950685  682373 main.go:141] libmachine: (ha-097312-m02)     <pae/>
	I0923 12:57:07.950692  682373 main.go:141] libmachine: (ha-097312-m02)     
	I0923 12:57:07.950704  682373 main.go:141] libmachine: (ha-097312-m02)   </features>
	I0923 12:57:07.950712  682373 main.go:141] libmachine: (ha-097312-m02)   <cpu mode='host-passthrough'>
	I0923 12:57:07.950720  682373 main.go:141] libmachine: (ha-097312-m02)   
	I0923 12:57:07.950726  682373 main.go:141] libmachine: (ha-097312-m02)   </cpu>
	I0923 12:57:07.950755  682373 main.go:141] libmachine: (ha-097312-m02)   <os>
	I0923 12:57:07.950767  682373 main.go:141] libmachine: (ha-097312-m02)     <type>hvm</type>
	I0923 12:57:07.950775  682373 main.go:141] libmachine: (ha-097312-m02)     <boot dev='cdrom'/>
	I0923 12:57:07.950783  682373 main.go:141] libmachine: (ha-097312-m02)     <boot dev='hd'/>
	I0923 12:57:07.950795  682373 main.go:141] libmachine: (ha-097312-m02)     <bootmenu enable='no'/>
	I0923 12:57:07.950802  682373 main.go:141] libmachine: (ha-097312-m02)   </os>
	I0923 12:57:07.950814  682373 main.go:141] libmachine: (ha-097312-m02)   <devices>
	I0923 12:57:07.950825  682373 main.go:141] libmachine: (ha-097312-m02)     <disk type='file' device='cdrom'>
	I0923 12:57:07.950841  682373 main.go:141] libmachine: (ha-097312-m02)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/boot2docker.iso'/>
	I0923 12:57:07.950853  682373 main.go:141] libmachine: (ha-097312-m02)       <target dev='hdc' bus='scsi'/>
	I0923 12:57:07.950887  682373 main.go:141] libmachine: (ha-097312-m02)       <readonly/>
	I0923 12:57:07.950906  682373 main.go:141] libmachine: (ha-097312-m02)     </disk>
	I0923 12:57:07.950914  682373 main.go:141] libmachine: (ha-097312-m02)     <disk type='file' device='disk'>
	I0923 12:57:07.950920  682373 main.go:141] libmachine: (ha-097312-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:57:07.950931  682373 main.go:141] libmachine: (ha-097312-m02)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/ha-097312-m02.rawdisk'/>
	I0923 12:57:07.950938  682373 main.go:141] libmachine: (ha-097312-m02)       <target dev='hda' bus='virtio'/>
	I0923 12:57:07.950943  682373 main.go:141] libmachine: (ha-097312-m02)     </disk>
	I0923 12:57:07.950950  682373 main.go:141] libmachine: (ha-097312-m02)     <interface type='network'>
	I0923 12:57:07.950956  682373 main.go:141] libmachine: (ha-097312-m02)       <source network='mk-ha-097312'/>
	I0923 12:57:07.950962  682373 main.go:141] libmachine: (ha-097312-m02)       <model type='virtio'/>
	I0923 12:57:07.950967  682373 main.go:141] libmachine: (ha-097312-m02)     </interface>
	I0923 12:57:07.950973  682373 main.go:141] libmachine: (ha-097312-m02)     <interface type='network'>
	I0923 12:57:07.950979  682373 main.go:141] libmachine: (ha-097312-m02)       <source network='default'/>
	I0923 12:57:07.950988  682373 main.go:141] libmachine: (ha-097312-m02)       <model type='virtio'/>
	I0923 12:57:07.951022  682373 main.go:141] libmachine: (ha-097312-m02)     </interface>
	I0923 12:57:07.951047  682373 main.go:141] libmachine: (ha-097312-m02)     <serial type='pty'>
	I0923 12:57:07.951056  682373 main.go:141] libmachine: (ha-097312-m02)       <target port='0'/>
	I0923 12:57:07.951071  682373 main.go:141] libmachine: (ha-097312-m02)     </serial>
	I0923 12:57:07.951083  682373 main.go:141] libmachine: (ha-097312-m02)     <console type='pty'>
	I0923 12:57:07.951094  682373 main.go:141] libmachine: (ha-097312-m02)       <target type='serial' port='0'/>
	I0923 12:57:07.951104  682373 main.go:141] libmachine: (ha-097312-m02)     </console>
	I0923 12:57:07.951110  682373 main.go:141] libmachine: (ha-097312-m02)     <rng model='virtio'>
	I0923 12:57:07.951122  682373 main.go:141] libmachine: (ha-097312-m02)       <backend model='random'>/dev/random</backend>
	I0923 12:57:07.951132  682373 main.go:141] libmachine: (ha-097312-m02)     </rng>
	I0923 12:57:07.951139  682373 main.go:141] libmachine: (ha-097312-m02)     
	I0923 12:57:07.951147  682373 main.go:141] libmachine: (ha-097312-m02)     
	I0923 12:57:07.951155  682373 main.go:141] libmachine: (ha-097312-m02)   </devices>
	I0923 12:57:07.951170  682373 main.go:141] libmachine: (ha-097312-m02) </domain>
	I0923 12:57:07.951208  682373 main.go:141] libmachine: (ha-097312-m02) 
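The XML printed above is the full libvirt domain definition for ha-097312-m02: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and NICs on both the default and mk-ha-097312 networks. A rough sketch, assuming the virsh CLI is available, of the equivalent define-and-start step that the kvm2 driver performs through libvirt's API:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStart writes the domain XML to a temporary file, defines the domain
// with virsh, and boots it; the driver does the same via libvirt bindings.
func defineAndStart(domainXML, name string) error {
	tmp, err := os.CreateTemp("", name+"-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.WriteString(domainXML); err != nil {
		return err
	}
	tmp.Close()
	if out, err := exec.Command("virsh", "define", tmp.Name()).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// In the log, domainXML is the <domain type='kvm'> document shown above;
	// this placeholder only illustrates the call shape.
	domainXML := "<domain type='kvm'>...</domain>"
	if err := defineAndStart(domainXML, "ha-097312-m02"); err != nil {
		fmt.Println(err)
	}
}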
	I0923 12:57:07.958737  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:28:cf:23 in network default
	I0923 12:57:07.959212  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:07.959260  682373 main.go:141] libmachine: (ha-097312-m02) Ensuring networks are active...
	I0923 12:57:07.960010  682373 main.go:141] libmachine: (ha-097312-m02) Ensuring network default is active
	I0923 12:57:07.960399  682373 main.go:141] libmachine: (ha-097312-m02) Ensuring network mk-ha-097312 is active
	I0923 12:57:07.960872  682373 main.go:141] libmachine: (ha-097312-m02) Getting domain xml...
	I0923 12:57:07.961596  682373 main.go:141] libmachine: (ha-097312-m02) Creating domain...
	I0923 12:57:09.236958  682373 main.go:141] libmachine: (ha-097312-m02) Waiting to get IP...
	I0923 12:57:09.237872  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:09.238432  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:09.238520  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:09.238409  682740 retry.go:31] will retry after 258.996903ms: waiting for machine to come up
	I0923 12:57:09.498848  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:09.499271  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:09.499300  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:09.499216  682740 retry.go:31] will retry after 390.01253ms: waiting for machine to come up
	I0923 12:57:09.890994  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:09.891540  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:09.891572  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:09.891465  682740 retry.go:31] will retry after 371.935324ms: waiting for machine to come up
	I0923 12:57:10.265244  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:10.265618  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:10.265655  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:10.265585  682740 retry.go:31] will retry after 510.543016ms: waiting for machine to come up
	I0923 12:57:10.777241  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:10.777723  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:10.777746  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:10.777656  682740 retry.go:31] will retry after 522.337855ms: waiting for machine to come up
	I0923 12:57:11.302530  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:11.303002  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:11.303023  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:11.302970  682740 retry.go:31] will retry after 745.395576ms: waiting for machine to come up
	I0923 12:57:12.049866  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:12.050223  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:12.050249  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:12.050180  682740 retry.go:31] will retry after 791.252666ms: waiting for machine to come up
	I0923 12:57:12.842707  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:12.843212  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:12.843250  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:12.843171  682740 retry.go:31] will retry after 1.03083414s: waiting for machine to come up
	I0923 12:57:13.876177  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:13.876677  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:13.876711  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:13.876621  682740 retry.go:31] will retry after 1.686909518s: waiting for machine to come up
	I0923 12:57:15.565124  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:15.565550  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:15.565574  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:15.565500  682740 retry.go:31] will retry after 1.944756654s: waiting for machine to come up
	I0923 12:57:17.512182  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:17.512709  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:17.512742  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:17.512627  682740 retry.go:31] will retry after 2.056101086s: waiting for machine to come up
	I0923 12:57:19.569989  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:19.570397  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:19.570422  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:19.570360  682740 retry.go:31] will retry after 2.406826762s: waiting for machine to come up
	I0923 12:57:21.980169  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:21.980856  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:21.980887  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:21.980793  682740 retry.go:31] will retry after 3.38134268s: waiting for machine to come up
	I0923 12:57:25.364366  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:25.364892  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:25.364919  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:25.364848  682740 retry.go:31] will retry after 4.745352265s: waiting for machine to come up
	I0923 12:57:30.113738  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.114252  682373 main.go:141] libmachine: (ha-097312-m02) Found IP for machine: 192.168.39.214
	I0923 12:57:30.114286  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has current primary IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.114295  682373 main.go:141] libmachine: (ha-097312-m02) Reserving static IP address...
	I0923 12:57:30.114645  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find host DHCP lease matching {name: "ha-097312-m02", mac: "52:54:00:aa:9c:e4", ip: "192.168.39.214"} in network mk-ha-097312
	I0923 12:57:30.195004  682373 main.go:141] libmachine: (ha-097312-m02) Reserved static IP address: 192.168.39.214
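The block above is libmachine polling libvirt's DHCP leases with a growing delay until the guest's MAC picks up an address. Below is a minimal Go sketch of that retry-with-backoff pattern; `lookupIP` is a hypothetical stand-in for the lease query, and the delay schedule is illustrative rather than minikube's actual retry.go behaviour.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the machine's
// MAC address; in this sketch it always fails, like the early attempts above.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a jittered, growing delay, loosely
// mirroring the "will retry after ..." lines in the log.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 500 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second { // let the delay grow, then hold it steady
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %s", deadline)
}

func main() {
	if _, err := waitForIP("52:54:00:aa:9c:e4", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```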
	I0923 12:57:30.195029  682373 main.go:141] libmachine: (ha-097312-m02) Waiting for SSH to be available...
	I0923 12:57:30.195051  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Getting to WaitForSSH function...
	I0923 12:57:30.198064  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.198485  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.198516  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.198655  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Using SSH client type: external
	I0923 12:57:30.198683  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa (-rw-------)
	I0923 12:57:30.198704  682373 main.go:141] libmachine: (ha-097312-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:57:30.198716  682373 main.go:141] libmachine: (ha-097312-m02) DBG | About to run SSH command:
	I0923 12:57:30.198732  682373 main.go:141] libmachine: (ha-097312-m02) DBG | exit 0
	I0923 12:57:30.322102  682373 main.go:141] libmachine: (ha-097312-m02) DBG | SSH cmd err, output: <nil>: 
	I0923 12:57:30.322535  682373 main.go:141] libmachine: (ha-097312-m02) KVM machine creation complete!
	I0923 12:57:30.322889  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetConfigRaw
	I0923 12:57:30.324198  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:30.325129  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:30.325321  682373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:57:30.325347  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetState
	I0923 12:57:30.327097  682373 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:57:30.327120  682373 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:57:30.327127  682373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:57:30.327136  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.330398  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.330831  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.330856  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.331084  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.331333  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.331567  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.331779  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.331980  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.332285  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.332308  682373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:57:30.433384  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
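Both WaitForSSH passes above boil down to running `exit 0` over SSH with the machine's private key until the command succeeds. A rough Go equivalent using golang.org/x/crypto/ssh is sketched below; the address, user and key path are placeholders, and this is not the libmachine implementation.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials the guest and runs "exit 0" until it succeeds, which is
// essentially what the WaitForSSH steps in the log above do.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			if sess, serr := client.NewSession(); serr == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil // SSH is available
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh not available on %s within %s", addr, timeout)
}

func main() {
	// placeholder key path; the real run uses the machine's id_rsa shown above
	if err := waitForSSH("192.168.39.214:22", "docker", "/path/to/id_rsa", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```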
	I0923 12:57:30.433417  682373 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:57:30.433425  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.436332  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.436753  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.436787  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.436960  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.437226  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.437407  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.437534  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.437680  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.437907  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.437921  682373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:57:30.542610  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:57:30.542690  682373 main.go:141] libmachine: found compatible host: buildroot
	I0923 12:57:30.542698  682373 main.go:141] libmachine: Provisioning with buildroot...
	I0923 12:57:30.542708  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:30.543041  682373 buildroot.go:166] provisioning hostname "ha-097312-m02"
	I0923 12:57:30.543071  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:30.543236  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.546448  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.546897  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.546919  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.547099  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.547300  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.547478  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.547640  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.547814  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.548056  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.548076  682373 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-097312-m02 && echo "ha-097312-m02" | sudo tee /etc/hostname
	I0923 12:57:30.664801  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312-m02
	
	I0923 12:57:30.664827  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.668130  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.668523  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.668560  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.668734  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.668953  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.669161  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.669310  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.669479  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.669670  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.669692  682373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-097312-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-097312-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-097312-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:57:30.782645  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:57:30.782678  682373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:57:30.782699  682373 buildroot.go:174] setting up certificates
	I0923 12:57:30.782714  682373 provision.go:84] configureAuth start
	I0923 12:57:30.782725  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:30.783040  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:30.785945  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.786433  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.786470  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.786603  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.788815  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.789202  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.789235  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.789394  682373 provision.go:143] copyHostCerts
	I0923 12:57:30.789433  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:57:30.789475  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 12:57:30.789485  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:57:30.789576  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 12:57:30.789670  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:57:30.789696  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 12:57:30.789707  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:57:30.789745  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:57:30.789814  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:57:30.789859  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 12:57:30.789868  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:57:30.789903  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:57:30.789977  682373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.ha-097312-m02 san=[127.0.0.1 192.168.39.214 ha-097312-m02 localhost minikube]
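The server certificate above is signed by the local minikube CA with IP and DNS SANs for the node (127.0.0.1, 192.168.39.214, ha-097312-m02, localhost, minikube). The Go sketch below shows the same idea with crypto/x509; the throwaway CA built in main, the key size and the validity period are assumptions for illustration, not minikube's crypto.go defaults.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate with the given CA, adding the same
// kind of IP and DNS SANs listed in the log above.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string,
	ips []net.IP, dnsNames []string) (certPEM, keyPEM []byte, err error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,      // e.g. 127.0.0.1 and 192.168.39.214
		DNSNames:     dnsNames, // e.g. ha-097312-m02, localhost, minikube
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// throwaway CA for the example; errors ignored for brevity
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	certPEM, _, err := newServerCert(caCert, caKey, "jenkins.ha-097312-m02",
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.214")},
		[]string{"ha-097312-m02", "localhost", "minikube"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", certPEM)
}
```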
	I0923 12:57:30.922412  682373 provision.go:177] copyRemoteCerts
	I0923 12:57:30.922481  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:57:30.922511  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.925683  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.926050  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.926084  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.926274  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.926483  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.926675  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.926797  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.008599  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:57:31.008683  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:57:31.033933  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:57:31.034023  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:57:31.058490  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:57:31.058585  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:57:31.083172  682373 provision.go:87] duration metric: took 300.435238ms to configureAuth
	I0923 12:57:31.083208  682373 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:57:31.083452  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:31.083557  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.086620  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.087006  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.087040  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.087226  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.087462  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.087673  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.087823  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.088047  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:31.088262  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:31.088294  682373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:57:31.308105  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 12:57:31.308130  682373 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:57:31.308138  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetURL
	I0923 12:57:31.309535  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Using libvirt version 6000000
	I0923 12:57:31.312541  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.312973  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.313010  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.313204  682373 main.go:141] libmachine: Docker is up and running!
	I0923 12:57:31.313219  682373 main.go:141] libmachine: Reticulating splines...
	I0923 12:57:31.313229  682373 client.go:171] duration metric: took 23.76868403s to LocalClient.Create
	I0923 12:57:31.313256  682373 start.go:167] duration metric: took 23.768751533s to libmachine.API.Create "ha-097312"
	I0923 12:57:31.313265  682373 start.go:293] postStartSetup for "ha-097312-m02" (driver="kvm2")
	I0923 12:57:31.313279  682373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:57:31.313296  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.313570  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:57:31.313596  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.315984  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.316386  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.316408  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.316617  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.316830  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.316990  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.317121  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.400827  682373 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:57:31.404978  682373 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:57:31.405008  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:57:31.405090  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:57:31.405188  682373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 12:57:31.405202  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 12:57:31.405345  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:57:31.415010  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:57:31.439229  682373 start.go:296] duration metric: took 125.945282ms for postStartSetup
	I0923 12:57:31.439312  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetConfigRaw
	I0923 12:57:31.439949  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:31.442989  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.443357  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.443391  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.443654  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:57:31.443870  682373 start.go:128] duration metric: took 23.918708009s to createHost
	I0923 12:57:31.443895  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.446222  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.446579  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.446608  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.446760  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.446969  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.447132  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.447282  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.447456  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:31.447638  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:31.447648  682373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:57:31.550685  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727096251.508834892
	
	I0923 12:57:31.550719  682373 fix.go:216] guest clock: 1727096251.508834892
	I0923 12:57:31.550731  682373 fix.go:229] Guest: 2024-09-23 12:57:31.508834892 +0000 UTC Remote: 2024-09-23 12:57:31.443883765 +0000 UTC m=+69.652378832 (delta=64.951127ms)
	I0923 12:57:31.550757  682373 fix.go:200] guest clock delta is within tolerance: 64.951127ms
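The guest-clock check above parses the VM's `date +%s.%N` output and compares it against the host timestamp; here the delta is about 65ms. A small Go sketch of that comparison, using the exact values from the log; the one-second tolerance is an assumed figure, not necessarily minikube's.

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output (seconds and nine nanosecond digits)
// and returns how far the guest clock is from the given host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// host ("Remote") timestamp taken from the log line above
	host := time.Date(2024, 9, 23, 12, 57, 31, 443883765, time.UTC)
	d, err := clockDelta("1727096251.508834892", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance for the sketch
	fmt.Printf("delta=%v withinTolerance=%v\n", d, math.Abs(d.Seconds()) < tolerance.Seconds())
}
```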
	I0923 12:57:31.550765  682373 start.go:83] releasing machines lock for "ha-097312-m02", held for 24.025730497s
	I0923 12:57:31.550798  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.551124  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:31.554365  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.554798  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.554829  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.557342  682373 out.go:177] * Found network options:
	I0923 12:57:31.558765  682373 out.go:177]   - NO_PROXY=192.168.39.160
	W0923 12:57:31.560271  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:57:31.560309  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.561020  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.561228  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.561372  682373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:57:31.561417  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	W0923 12:57:31.561455  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:57:31.561533  682373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:57:31.561554  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.564108  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564231  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564516  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.564549  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564574  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.564586  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564758  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.564856  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.564956  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.565019  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.565102  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.565177  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.565238  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.565280  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.802089  682373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:57:31.808543  682373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:57:31.808622  682373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:57:31.824457  682373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:57:31.824502  682373 start.go:495] detecting cgroup driver to use...
	I0923 12:57:31.824591  682373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:57:31.842591  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:57:31.857349  682373 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:57:31.857432  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:57:31.871118  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:57:31.884433  682373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:57:31.998506  682373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:57:32.140771  682373 docker.go:233] disabling docker service ...
	I0923 12:57:32.140848  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:57:32.154917  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:57:32.167722  682373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:57:32.306721  682373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:57:32.442305  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:57:32.455563  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:57:32.473584  682373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:57:32.473664  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.483856  682373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:57:32.483926  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.493889  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.503832  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.514226  682373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:57:32.524620  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.534430  682373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.550444  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
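The sed commands above pin the pause image, switch CRI-O to the cgroupfs cgroup manager and move conmon into the "pod" cgroup inside /etc/crio/crio.conf.d/02-crio.conf. The Go sketch below applies the same substitutions to an in-memory config string; minikube performs them remotely with sed, and the sample input here is made up.

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the effect of the sed edits above on the CRI-O
// drop-in: replace pause_image and cgroup_manager, drop any existing
// conmon_cgroup line, then re-add conmon_cgroup = "pod" after cgroup_manager.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
		"[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}
```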
	I0923 12:57:32.560917  682373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:57:32.570816  682373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:57:32.570878  682373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:57:32.583098  682373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:57:32.592948  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:57:32.720270  682373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 12:57:32.812338  682373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:57:32.812420  682373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 12:57:32.817090  682373 start.go:563] Will wait 60s for crictl version
	I0923 12:57:32.817148  682373 ssh_runner.go:195] Run: which crictl
	I0923 12:57:32.820890  682373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:57:32.862384  682373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 12:57:32.862475  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:57:32.889442  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:57:32.919399  682373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:57:32.921499  682373 out.go:177]   - env NO_PROXY=192.168.39.160
	I0923 12:57:32.923091  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:32.926243  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:32.926570  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:32.926593  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:32.926824  682373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:57:32.930826  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:57:32.942746  682373 mustload.go:65] Loading cluster: ha-097312
	I0923 12:57:32.942993  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:32.943344  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:32.943396  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:32.959345  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0923 12:57:32.959837  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:32.960440  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:32.960462  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:32.960839  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:32.961073  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:32.962981  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:32.963304  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:32.963359  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:32.979062  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0923 12:57:32.979655  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:32.980147  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:32.980171  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:32.980553  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:32.980783  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:32.980997  682373 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312 for IP: 192.168.39.214
	I0923 12:57:32.981024  682373 certs.go:194] generating shared ca certs ...
	I0923 12:57:32.981042  682373 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:32.981215  682373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:57:32.981259  682373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:57:32.981266  682373 certs.go:256] generating profile certs ...
	I0923 12:57:32.981360  682373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key
	I0923 12:57:32.981395  682373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f
	I0923 12:57:32.981420  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160 192.168.39.214 192.168.39.254]
	I0923 12:57:33.071795  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f ...
	I0923 12:57:33.071829  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f: {Name:mk62bd79cb1d47d4e42d7ff40584a205e823ac92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:33.072049  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f ...
	I0923 12:57:33.072069  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f: {Name:mk7d02454991cfe0917d276979b247a33b0bbebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:33.072179  682373 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	I0923 12:57:33.072334  682373 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key
	I0923 12:57:33.072469  682373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key
	I0923 12:57:33.072488  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:57:33.072504  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:57:33.072515  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:57:33.072525  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:57:33.072541  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:57:33.072553  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:57:33.072563  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:57:33.072575  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:57:33.072624  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 12:57:33.072650  682373 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 12:57:33.072659  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:57:33.072682  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:57:33.072703  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:57:33.072727  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:57:33.072766  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:57:33.072809  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.072831  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.072841  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.072884  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:33.076209  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:33.076612  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:33.076643  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:33.076790  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:33.077013  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:33.077175  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:33.077328  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:57:33.154333  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 12:57:33.159047  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 12:57:33.170550  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 12:57:33.175236  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0923 12:57:33.186589  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 12:57:33.192195  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 12:57:33.206938  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 12:57:33.211432  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 12:57:33.222459  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 12:57:33.226550  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 12:57:33.237861  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 12:57:33.242413  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1671 bytes)
	I0923 12:57:33.252582  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:57:33.276338  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:57:33.301928  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:57:33.327107  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:57:33.353167  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 12:57:33.377281  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:57:33.401324  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:57:33.426736  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:57:33.451659  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 12:57:33.475444  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:57:33.500205  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 12:57:33.524995  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 12:57:33.542090  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0923 12:57:33.558637  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 12:57:33.577724  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 12:57:33.595235  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 12:57:33.613246  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1671 bytes)
	I0923 12:57:33.629756  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 12:57:33.646976  682373 ssh_runner.go:195] Run: openssl version
	I0923 12:57:33.652839  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:57:33.665921  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.671324  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.671395  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.677752  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:57:33.688883  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 12:57:33.699858  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.704184  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.704258  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.709888  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 12:57:33.720601  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 12:57:33.731770  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.736581  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.736662  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.742744  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:57:33.754098  682373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:57:33.758320  682373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:57:33.758398  682373 kubeadm.go:934] updating node {m02 192.168.39.214 8443 v1.31.1 crio true true} ...
	I0923 12:57:33.758510  682373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-097312-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
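The kubelet [Unit]/[Service] stanza above is rendered from the per-node values in the config dump (Kubernetes version, node name, node IP). A minimal text/template sketch of that rendering, with the struct and field names chosen here for illustration; the real template lives in minikube's kubeadm packages:

package main

import (
	"os"
	"text/template"
)

// nodeOpts carries only the fields the drop-in needs; names are illustrative.
type nodeOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	opts := nodeOpts{KubernetesVersion: "v1.31.1", NodeName: "ha-097312-m02", NodeIP: "192.168.39.214"}
	// Render to stdout; the real flow scp's the rendered bytes to the node as a systemd drop-in.
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}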
	I0923 12:57:33.758543  682373 kube-vip.go:115] generating kube-vip config ...
	I0923 12:57:33.758604  682373 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:57:33.773852  682373 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:57:33.773946  682373 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
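The static pod manifest above is what lets kube-vip hold the control-plane VIP 192.168.39.254 on eth0 and, with cp_enable and lb_enable set, leader-elect and load-balance API traffic on port 8443 across the control-plane members. Only a handful of values vary per cluster; the rest is fixed control-plane-mode boilerplate. A small sketch of that mapping from cluster settings to kube-vip environment variables (the grouping is illustrative, not minikube's code):

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Cluster-specific inputs, as seen in the manifest above.
	vip := "192.168.39.254"
	iface := "eth0"
	apiPort := "8443"

	env := map[string]string{
		"vip_arp":       "true",          // answer ARP for the VIP
		"vip_interface": iface,           // NIC that carries the VIP
		"address":       vip,             // the floating control-plane address
		"port":          apiPort,         // API server port behind the VIP
		"cp_enable":     "true",          // control-plane leader-election mode
		"lb_enable":     "true",          // load-balance API traffic to members
		"lb_port":       apiPort,         // load balancer listen port
		"vip_leasename": "plndr-cp-lock", // coordination lease in kube-system
	}

	keys := make([]string, 0, len(env))
	for k := range env {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("- name: %s\n  value: %q\n", k, env[k])
	}
}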
	I0923 12:57:33.774016  682373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:57:33.784005  682373 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 12:57:33.784077  682373 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 12:57:33.795537  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 12:57:33.795576  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:57:33.795628  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:57:33.795645  682373 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0923 12:57:33.795645  682373 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0923 12:57:33.800211  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 12:57:33.800250  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 12:57:34.690726  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:57:34.690835  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:57:34.695973  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 12:57:34.696015  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 12:57:34.821772  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:57:34.859449  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:57:34.859576  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:57:34.865043  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 12:57:34.865081  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
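The three binary transfers above all follow the same pattern: stat the binary on the node, and when it is missing, push the locally cached copy, downloading it first from dl.k8s.io against its published .sha256 checksum when the cache is cold. A minimal sketch of the download-and-verify half, assuming the standard release URLs shown in the log; the real code streams to a cache file and then scp's the result:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

// fetch downloads url and returns the response body.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	version, binary := "v1.31.1", "kubelet"
	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/linux/amd64/%s", version, binary)

	body, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}

	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(body)
	if got := hex.EncodeToString(sum[:]); got != want {
		log.Fatalf("checksum mismatch for %s: got %s want %s", binary, got, want)
	}
	fmt.Printf("verified %s (%d bytes)\n", binary, len(body))
}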
	I0923 12:57:35.467374  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 12:57:35.477615  682373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 12:57:35.494947  682373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:57:35.511461  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 12:57:35.528089  682373 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:57:35.532321  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
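The /etc/hosts edit above is idempotent: the preceding grep only checks whether the control-plane.minikube.internal entry already points at the VIP, and the rewrite filters out any stale line before appending the current one. A small sketch that builds the same shell pipeline in Go (the helper name is made up; the generated command mirrors the log line, including the literal tab):

package main

import "fmt"

// hostsUpdateCmd returns the shell command that pins name to ip in /etc/hosts,
// dropping any previous line for name first so repeated runs stay idempotent.
func hostsUpdateCmd(ip, name string) string {
	entry := ip + "\t" + name // literal tab, as in the log line
	return "{ grep -v $'\\t" + name + "$' \"/etc/hosts\"; echo \"" + entry + "\"; } " +
		"> /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\""
}

func main() {
	fmt.Println(hostsUpdateCmd("192.168.39.254", "control-plane.minikube.internal"))
}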
	I0923 12:57:35.545355  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:57:35.675932  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:57:35.693246  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:35.693787  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:35.693897  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:35.709354  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I0923 12:57:35.709824  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:35.710378  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:35.710405  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:35.710810  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:35.711063  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:35.711227  682373 start.go:317] joinCluster: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:57:35.711360  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 12:57:35.711378  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:35.714477  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:35.714953  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:35.714989  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:35.715229  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:35.715442  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:35.715639  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:35.715775  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:57:35.872553  682373 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:57:35.872604  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xyxxia.g4s5n9l2o4j0fmlt --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m02 --control-plane --apiserver-advertise-address=192.168.39.214 --apiserver-bind-port=8443"
	I0923 12:57:59.258533  682373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xyxxia.g4s5n9l2o4j0fmlt --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m02 --control-plane --apiserver-advertise-address=192.168.39.214 --apiserver-bind-port=8443": (23.385898049s)
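The join above is two steps: `kubeadm token create --print-join-command --ttl=0` on the existing control plane emits the base join command (endpoint, token, CA cert hash), and the caller appends the control-plane-specific flags seen in the log (--control-plane, --apiserver-advertise-address, --apiserver-bind-port, --node-name, the CRI socket). A minimal sketch of that composition, assuming the base command has already been captured as a string (token and hash are placeholders here):

package main

import (
	"fmt"
	"strings"
)

// controlPlaneJoin turns the worker join command printed by
// `kubeadm token create --print-join-command` into a control-plane join,
// mirroring the extra flags in the log above.
func controlPlaneJoin(base, nodeName, advertiseIP string, port int) string {
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", port),
	}
	return strings.TrimSpace(base) + " " + strings.Join(extra, " ")
}

func main() {
	base := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
	fmt.Println(controlPlaneJoin(base, "ha-097312-m02", "192.168.39.214", 8443))
}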
	I0923 12:57:59.258586  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 12:57:59.796861  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-097312-m02 minikube.k8s.io/updated_at=2024_09_23T12_57_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-097312 minikube.k8s.io/primary=false
	I0923 12:57:59.924798  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-097312-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 12:58:00.039331  682373 start.go:319] duration metric: took 24.32808596s to joinCluster
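After the join, the new node is labeled with minikube metadata and its control-plane NoSchedule taint is removed so it can also run workloads (the node is Worker:true in the config). The log does this through kubectl; a sketch of the equivalent taint removal with client-go is below (the kubeconfig path matches the log, but using client-go here is an assumption, not minikube's implementation):

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	node, err := client.CoreV1().Nodes().Get(ctx, "ha-097312-m02", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Drop the control-plane NoSchedule taint, equivalent to
	// `kubectl taint nodes ... node-role.kubernetes.io/control-plane:NoSchedule-`.
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if !(t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule) {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept

	if _, err := client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}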
	I0923 12:58:00.039429  682373 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:58:00.039711  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:00.041025  682373 out.go:177] * Verifying Kubernetes components...
	I0923 12:58:00.042555  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:58:00.236705  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:58:00.254117  682373 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:58:00.254361  682373 kapi.go:59] client config for ha-097312: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key", CAFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 12:58:00.254428  682373 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.160:8443
	I0923 12:58:00.254651  682373 node_ready.go:35] waiting up to 6m0s for node "ha-097312-m02" to be "Ready" ...
	I0923 12:58:00.254771  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:00.254779  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:00.254788  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:00.254792  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:00.285534  682373 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0923 12:58:00.755122  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:00.755151  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:00.755162  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:00.755168  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:00.759795  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:01.254994  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:01.255020  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:01.255029  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:01.255034  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:01.269257  682373 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0923 12:58:01.755083  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:01.755109  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:01.755117  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:01.755121  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:01.759623  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:02.255610  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:02.255632  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:02.255641  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:02.255645  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:02.259196  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:02.259691  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:02.755738  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:02.755768  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:02.755777  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:02.755781  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:02.759269  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:03.255079  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:03.255106  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:03.255115  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:03.255120  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:03.259155  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:03.755217  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:03.755244  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:03.755251  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:03.755255  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:03.759086  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:04.255149  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:04.255177  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:04.255187  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:04.255193  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:04.259605  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:04.260038  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:04.755404  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:04.755434  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:04.755446  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:04.755452  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:04.762670  682373 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:58:05.255127  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:05.255157  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:05.255166  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:05.255172  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:05.259007  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:05.755425  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:05.755458  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:05.755470  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:05.755475  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:05.759105  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:06.255090  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:06.255119  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:06.255128  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:06.255134  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:06.259815  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:06.260439  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:06.755181  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:06.755209  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:06.755219  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:06.755226  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:06.758768  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:07.255412  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:07.255447  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:07.255458  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:07.255466  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:07.258578  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:07.755939  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:07.755966  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:07.755975  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:07.755978  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:07.759564  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:08.255677  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:08.255716  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:08.255730  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:08.255735  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:08.259088  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:08.754970  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:08.755000  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:08.755012  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:08.755020  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:08.758314  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:08.758910  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:09.256074  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:09.256105  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:09.256115  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:09.256120  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:09.259267  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:09.754981  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:09.755005  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:09.755014  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:09.755019  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:09.758517  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:10.255140  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:10.255164  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:10.255173  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:10.255178  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:10.261151  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:58:10.755682  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:10.755711  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:10.755722  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:10.755728  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:10.759364  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:10.759961  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:11.255328  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:11.255355  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:11.255363  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:11.255367  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:11.259613  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:11.755288  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:11.755316  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:11.755331  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:11.755336  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:11.759266  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:12.255138  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:12.255270  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:12.255308  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:12.255317  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:12.259134  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:12.755572  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:12.755596  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:12.755604  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:12.755610  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:12.758861  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:13.255907  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:13.255934  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:13.255942  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:13.255946  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:13.259259  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:13.259818  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:13.755217  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:13.755243  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:13.755251  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:13.755255  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:13.759226  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:14.255176  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:14.255208  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:14.255219  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:14.255226  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:14.258744  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:14.755918  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:14.755946  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:14.755953  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:14.755957  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:14.759652  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:15.255703  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:15.255732  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:15.255745  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:15.255754  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:15.259193  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:15.755854  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:15.755888  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:15.755896  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:15.755900  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:15.759137  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:15.759696  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:16.255882  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:16.255910  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:16.255918  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:16.255922  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:16.259597  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:16.755835  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:16.755869  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:16.755887  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:16.755896  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:16.759860  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:17.255730  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:17.255754  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:17.255769  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:17.255773  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:17.259628  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:17.755085  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:17.755111  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:17.755119  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:17.755124  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:17.759249  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:17.759743  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:18.255184  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:18.255211  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.255225  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.255242  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.259648  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:18.754896  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:18.754921  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.754930  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.754935  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.759143  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:18.759759  682373 node_ready.go:49] node "ha-097312-m02" has status "Ready":"True"
	I0923 12:58:18.759779  682373 node_ready.go:38] duration metric: took 18.505092333s for node "ha-097312-m02" to be "Ready" ...
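The GET loop above simply polls /api/v1/nodes/ha-097312-m02 roughly every 500ms until the NodeReady condition flips to True (about 18.5s here). A minimal client-go sketch of the same wait, using a plain poll loop; the kubeconfig path and 6m timeout come from the log, the 500ms interval from the visible request cadence:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19690-662205/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-097312-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node to be Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}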
	I0923 12:58:18.759789  682373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:58:18.759872  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:18.759882  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.759890  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.759895  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.765186  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:58:18.771234  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.771365  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6g9x2
	I0923 12:58:18.771376  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.771387  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.771396  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.775100  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.775960  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:18.775983  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.775993  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.776003  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.779024  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.779526  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.779547  682373 pod_ready.go:82] duration metric: took 8.277628ms for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.779561  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.779632  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-txcxz
	I0923 12:58:18.779642  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.779652  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.779659  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.782895  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.783552  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:18.783573  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.783582  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.783588  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.786568  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:58:18.787170  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.787189  682373 pod_ready.go:82] duration metric: took 7.619712ms for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.787202  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.787274  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312
	I0923 12:58:18.787284  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.787295  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.787303  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.792015  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:18.792787  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:18.792809  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.792820  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.792826  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.796338  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.796833  682373 pod_ready.go:93] pod "etcd-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.796854  682373 pod_ready.go:82] duration metric: took 9.643589ms for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.796863  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.796938  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m02
	I0923 12:58:18.796951  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.796958  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.796962  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.800096  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.800646  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:18.800664  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.800675  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.800680  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.803250  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:58:18.803795  682373 pod_ready.go:93] pod "etcd-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.803820  682373 pod_ready.go:82] duration metric: took 6.946045ms for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.803842  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.955292  682373 request.go:632] Waited for 151.365865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:58:18.955373  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:58:18.955378  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.955388  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.955394  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.959155  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.155346  682373 request.go:632] Waited for 195.422034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.155457  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.155466  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.155481  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.155491  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.158847  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.159413  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:19.159433  682373 pod_ready.go:82] duration metric: took 355.582451ms for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.159446  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.355524  682373 request.go:632] Waited for 195.972937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:58:19.355603  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:58:19.355611  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.355624  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.355634  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.358947  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.555060  682373 request.go:632] Waited for 195.299012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:19.555156  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:19.555165  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.555173  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.555180  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.558664  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.559169  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:19.559189  682373 pod_ready.go:82] duration metric: took 399.735219ms for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.559199  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.755252  682373 request.go:632] Waited for 195.975758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:58:19.755347  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:58:19.755367  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.755395  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.755406  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.759281  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.955410  682373 request.go:632] Waited for 195.442789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.955490  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.955495  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.955504  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.955551  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.960116  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:19.960952  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:19.960978  682373 pod_ready.go:82] duration metric: took 401.771647ms for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.960989  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.155181  682373 request.go:632] Waited for 194.10652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:58:20.155288  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:58:20.155299  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.155307  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.155311  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.158904  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:20.355343  682373 request.go:632] Waited for 195.400275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:20.355420  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:20.355425  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.355434  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.355440  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.358631  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:20.359159  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:20.359188  682373 pod_ready.go:82] duration metric: took 398.191037ms for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.359202  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.555330  682373 request.go:632] Waited for 196.021107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:58:20.555406  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:58:20.555412  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.555420  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.555430  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.559151  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:20.755254  682373 request.go:632] Waited for 195.454293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:20.755335  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:20.755340  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.755347  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.755351  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.759445  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:20.760118  682373 pod_ready.go:93] pod "kube-proxy-drj8m" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:20.760139  682373 pod_ready.go:82] duration metric: took 400.929533ms for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.760148  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.955378  682373 request.go:632] Waited for 195.139639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:58:20.955478  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:58:20.955488  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.955496  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.955517  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.959839  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:21.155010  682373 request.go:632] Waited for 194.343151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.155079  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.155084  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.155092  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.155096  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.158450  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:21.158954  682373 pod_ready.go:93] pod "kube-proxy-z6ss5" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:21.158974  682373 pod_ready.go:82] duration metric: took 398.819585ms for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.158984  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.355051  682373 request.go:632] Waited for 195.979167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:58:21.355148  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:58:21.355153  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.355161  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.355166  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.359586  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:21.554981  682373 request.go:632] Waited for 194.336515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:21.555072  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:21.555080  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.555090  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.555099  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.558426  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:21.558962  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:21.558988  682373 pod_ready.go:82] duration metric: took 399.997577ms for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.558999  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.755254  682373 request.go:632] Waited for 196.12462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:58:21.755345  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:58:21.755351  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.755359  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.755363  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.759215  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:21.955895  682373 request.go:632] Waited for 196.121213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.955983  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.955989  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.955996  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.956001  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.960399  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:21.960900  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:21.960922  682373 pod_ready.go:82] duration metric: took 401.915303ms for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.960933  682373 pod_ready.go:39] duration metric: took 3.201132427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
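The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from the Kubernetes client's token-bucket rate limiter (client-go defaults to roughly 5 requests/s with a small burst). A minimal sketch of the same pattern using golang.org/x/time/rate is shown below; the limits, loop, and printed message are illustrative and not minikube's actual configuration.

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// ~5 requests/s with a burst of 3; Wait blocks until a token is available,
	// which is where the ~200ms pauses in the log come from once the burst is
	// spent. Values here are illustrative, not client-go's exact defaults.
	limiter := rate.NewLimiter(rate.Limit(5), 3)
	for i := 0; i < 10; i++ {
		start := time.Now()
		if err := limiter.Wait(context.Background()); err != nil {
			fmt.Println("wait error:", err)
			return
		}
		if d := time.Since(start); d > time.Millisecond {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
		}
		// ... issue the API GET here ...
	}
}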
	I0923 12:58:21.960950  682373 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:58:21.961025  682373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:58:21.980626  682373 api_server.go:72] duration metric: took 21.941154667s to wait for apiserver process to appear ...
	I0923 12:58:21.980660  682373 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:58:21.980684  682373 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I0923 12:58:21.985481  682373 api_server.go:279] https://192.168.39.160:8443/healthz returned 200:
	ok
	I0923 12:58:21.985563  682373 round_trippers.go:463] GET https://192.168.39.160:8443/version
	I0923 12:58:21.985574  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.985582  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.985586  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.986808  682373 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:58:21.987069  682373 api_server.go:141] control plane version: v1.31.1
	I0923 12:58:21.987104  682373 api_server.go:131] duration metric: took 6.43733ms to wait for apiserver health ...
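The apiserver health check above is two plain HTTPS GETs: /healthz, then /version. A minimal sketch of such a probe with net/http follows; the InsecureSkipVerify transport is an assumption made to keep the sketch short, whereas the real client trusts the cluster CA and presents client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: skip TLS verification for brevity; minikube authenticates
	// against the cluster CA instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.160:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
	}
}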
	I0923 12:58:21.987113  682373 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:58:22.155587  682373 request.go:632] Waited for 168.378674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.155651  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.155657  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.155665  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.155669  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.166855  682373 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 12:58:22.174103  682373 system_pods.go:59] 17 kube-system pods found
	I0923 12:58:22.174149  682373 system_pods.go:61] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:58:22.174157  682373 system_pods.go:61] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:58:22.174164  682373 system_pods.go:61] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:58:22.174170  682373 system_pods.go:61] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:58:22.174176  682373 system_pods.go:61] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:58:22.174182  682373 system_pods.go:61] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:58:22.174188  682373 system_pods.go:61] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:58:22.174194  682373 system_pods.go:61] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:58:22.174199  682373 system_pods.go:61] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:58:22.174205  682373 system_pods.go:61] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:58:22.174214  682373 system_pods.go:61] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:58:22.174226  682373 system_pods.go:61] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:58:22.174233  682373 system_pods.go:61] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:58:22.174240  682373 system_pods.go:61] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:58:22.174247  682373 system_pods.go:61] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:58:22.174253  682373 system_pods.go:61] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:58:22.174264  682373 system_pods.go:61] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:58:22.174277  682373 system_pods.go:74] duration metric: took 187.156047ms to wait for pod list to return data ...
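The pod inventory above corresponds to a single list of the kube-system namespace followed by a per-pod phase check. A sketch of the equivalent call with client-go is shown below; the kubeconfig path is a placeholder and the output is only loosely modeled on the log.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube builds its client from the profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning {
			fmt.Println("  not running yet")
		}
	}
}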
	I0923 12:58:22.174293  682373 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:58:22.355843  682373 request.go:632] Waited for 181.449658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:58:22.355909  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:58:22.355914  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.355922  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.355927  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.360440  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:22.360699  682373 default_sa.go:45] found service account: "default"
	I0923 12:58:22.360716  682373 default_sa.go:55] duration metric: took 186.414512ms for default service account to be created ...
	I0923 12:58:22.360725  682373 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:58:22.555206  682373 request.go:632] Waited for 194.405433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.555295  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.555301  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.555308  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.555316  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.560454  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:58:22.566018  682373 system_pods.go:86] 17 kube-system pods found
	I0923 12:58:22.566047  682373 system_pods.go:89] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:58:22.566053  682373 system_pods.go:89] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:58:22.566057  682373 system_pods.go:89] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:58:22.566061  682373 system_pods.go:89] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:58:22.566064  682373 system_pods.go:89] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:58:22.566068  682373 system_pods.go:89] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:58:22.566072  682373 system_pods.go:89] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:58:22.566075  682373 system_pods.go:89] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:58:22.566079  682373 system_pods.go:89] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:58:22.566083  682373 system_pods.go:89] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:58:22.566086  682373 system_pods.go:89] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:58:22.566090  682373 system_pods.go:89] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:58:22.566093  682373 system_pods.go:89] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:58:22.566097  682373 system_pods.go:89] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:58:22.566100  682373 system_pods.go:89] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:58:22.566103  682373 system_pods.go:89] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:58:22.566106  682373 system_pods.go:89] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:58:22.566112  682373 system_pods.go:126] duration metric: took 205.38119ms to wait for k8s-apps to be running ...
	I0923 12:58:22.566121  682373 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:58:22.566168  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:58:22.581419  682373 system_svc.go:56] duration metric: took 15.287038ms WaitForService to wait for kubelet
	I0923 12:58:22.581451  682373 kubeadm.go:582] duration metric: took 22.541987533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:58:22.581470  682373 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:58:22.755938  682373 request.go:632] Waited for 174.364793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes
	I0923 12:58:22.756006  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes
	I0923 12:58:22.756011  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.756019  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.756027  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.760246  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:22.760965  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:58:22.760989  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:58:22.761000  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:58:22.761004  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:58:22.761010  682373 node_conditions.go:105] duration metric: took 179.533922ms to run NodePressure ...
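The NodePressure verification above reads each node's reported capacity and conditions. A hedged client-go sketch of that check follows; again the kubeconfig path is a placeholder and only memory/disk pressure conditions are inspected.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("  node %s reports %s\n", n.Name, c.Type)
			}
		}
	}
}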
	I0923 12:58:22.761032  682373 start.go:241] waiting for startup goroutines ...
	I0923 12:58:22.761061  682373 start.go:255] writing updated cluster config ...
	I0923 12:58:22.763224  682373 out.go:201] 
	I0923 12:58:22.764656  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:22.764766  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:58:22.766263  682373 out.go:177] * Starting "ha-097312-m03" control-plane node in "ha-097312" cluster
	I0923 12:58:22.767263  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:58:22.767288  682373 cache.go:56] Caching tarball of preloaded images
	I0923 12:58:22.767425  682373 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:58:22.767438  682373 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:58:22.767549  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:58:22.767768  682373 start.go:360] acquireMachinesLock for ha-097312-m03: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:58:22.767826  682373 start.go:364] duration metric: took 34.115µs to acquireMachinesLock for "ha-097312-m03"
	I0923 12:58:22.767850  682373 start.go:93] Provisioning new machine with config: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:58:22.767994  682373 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0923 12:58:22.769439  682373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:58:22.769539  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:22.769588  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:22.784952  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0923 12:58:22.785373  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:22.785878  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:22.785904  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:22.786220  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:22.786438  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:22.786607  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:22.786798  682373 start.go:159] libmachine.API.Create for "ha-097312" (driver="kvm2")
	I0923 12:58:22.786843  682373 client.go:168] LocalClient.Create starting
	I0923 12:58:22.786909  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:58:22.786967  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:58:22.786989  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:58:22.787065  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:58:22.787087  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:58:22.787098  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:58:22.787116  682373 main.go:141] libmachine: Running pre-create checks...
	I0923 12:58:22.787123  682373 main.go:141] libmachine: (ha-097312-m03) Calling .PreCreateCheck
	I0923 12:58:22.787356  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetConfigRaw
	I0923 12:58:22.787880  682373 main.go:141] libmachine: Creating machine...
	I0923 12:58:22.787894  682373 main.go:141] libmachine: (ha-097312-m03) Calling .Create
	I0923 12:58:22.788064  682373 main.go:141] libmachine: (ha-097312-m03) Creating KVM machine...
	I0923 12:58:22.789249  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found existing default KVM network
	I0923 12:58:22.789434  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found existing private KVM network mk-ha-097312
	I0923 12:58:22.789576  682373 main.go:141] libmachine: (ha-097312-m03) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03 ...
	I0923 12:58:22.789598  682373 main.go:141] libmachine: (ha-097312-m03) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:58:22.789697  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:22.789573  683157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:58:22.789778  682373 main.go:141] libmachine: (ha-097312-m03) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:58:23.067488  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:23.067344  683157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa...
	I0923 12:58:23.227591  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:23.227420  683157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/ha-097312-m03.rawdisk...
	I0923 12:58:23.227631  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Writing magic tar header
	I0923 12:58:23.227668  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Writing SSH key tar header
	I0923 12:58:23.227688  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:23.227552  683157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03 ...
	I0923 12:58:23.227701  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03 (perms=drwx------)
	I0923 12:58:23.227722  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:58:23.227735  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:58:23.227750  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:58:23.227770  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:58:23.227784  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03
	I0923 12:58:23.227800  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:58:23.227813  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:58:23.227827  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:58:23.227839  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:58:23.227850  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:58:23.227887  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:58:23.227917  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home
	I0923 12:58:23.227930  682373 main.go:141] libmachine: (ha-097312-m03) Creating domain...
	I0923 12:58:23.227949  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Skipping /home - not owner
	I0923 12:58:23.228646  682373 main.go:141] libmachine: (ha-097312-m03) define libvirt domain using xml: 
	I0923 12:58:23.228661  682373 main.go:141] libmachine: (ha-097312-m03) <domain type='kvm'>
	I0923 12:58:23.228669  682373 main.go:141] libmachine: (ha-097312-m03)   <name>ha-097312-m03</name>
	I0923 12:58:23.228688  682373 main.go:141] libmachine: (ha-097312-m03)   <memory unit='MiB'>2200</memory>
	I0923 12:58:23.228717  682373 main.go:141] libmachine: (ha-097312-m03)   <vcpu>2</vcpu>
	I0923 12:58:23.228738  682373 main.go:141] libmachine: (ha-097312-m03)   <features>
	I0923 12:58:23.228750  682373 main.go:141] libmachine: (ha-097312-m03)     <acpi/>
	I0923 12:58:23.228767  682373 main.go:141] libmachine: (ha-097312-m03)     <apic/>
	I0923 12:58:23.228781  682373 main.go:141] libmachine: (ha-097312-m03)     <pae/>
	I0923 12:58:23.228788  682373 main.go:141] libmachine: (ha-097312-m03)     
	I0923 12:58:23.228798  682373 main.go:141] libmachine: (ha-097312-m03)   </features>
	I0923 12:58:23.228813  682373 main.go:141] libmachine: (ha-097312-m03)   <cpu mode='host-passthrough'>
	I0923 12:58:23.228824  682373 main.go:141] libmachine: (ha-097312-m03)   
	I0923 12:58:23.228832  682373 main.go:141] libmachine: (ha-097312-m03)   </cpu>
	I0923 12:58:23.228843  682373 main.go:141] libmachine: (ha-097312-m03)   <os>
	I0923 12:58:23.228853  682373 main.go:141] libmachine: (ha-097312-m03)     <type>hvm</type>
	I0923 12:58:23.228866  682373 main.go:141] libmachine: (ha-097312-m03)     <boot dev='cdrom'/>
	I0923 12:58:23.228881  682373 main.go:141] libmachine: (ha-097312-m03)     <boot dev='hd'/>
	I0923 12:58:23.228893  682373 main.go:141] libmachine: (ha-097312-m03)     <bootmenu enable='no'/>
	I0923 12:58:23.228902  682373 main.go:141] libmachine: (ha-097312-m03)   </os>
	I0923 12:58:23.228911  682373 main.go:141] libmachine: (ha-097312-m03)   <devices>
	I0923 12:58:23.228922  682373 main.go:141] libmachine: (ha-097312-m03)     <disk type='file' device='cdrom'>
	I0923 12:58:23.228960  682373 main.go:141] libmachine: (ha-097312-m03)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/boot2docker.iso'/>
	I0923 12:58:23.228987  682373 main.go:141] libmachine: (ha-097312-m03)       <target dev='hdc' bus='scsi'/>
	I0923 12:58:23.228998  682373 main.go:141] libmachine: (ha-097312-m03)       <readonly/>
	I0923 12:58:23.229011  682373 main.go:141] libmachine: (ha-097312-m03)     </disk>
	I0923 12:58:23.229023  682373 main.go:141] libmachine: (ha-097312-m03)     <disk type='file' device='disk'>
	I0923 12:58:23.229035  682373 main.go:141] libmachine: (ha-097312-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:58:23.229050  682373 main.go:141] libmachine: (ha-097312-m03)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/ha-097312-m03.rawdisk'/>
	I0923 12:58:23.229060  682373 main.go:141] libmachine: (ha-097312-m03)       <target dev='hda' bus='virtio'/>
	I0923 12:58:23.229070  682373 main.go:141] libmachine: (ha-097312-m03)     </disk>
	I0923 12:58:23.229081  682373 main.go:141] libmachine: (ha-097312-m03)     <interface type='network'>
	I0923 12:58:23.229090  682373 main.go:141] libmachine: (ha-097312-m03)       <source network='mk-ha-097312'/>
	I0923 12:58:23.229114  682373 main.go:141] libmachine: (ha-097312-m03)       <model type='virtio'/>
	I0923 12:58:23.229140  682373 main.go:141] libmachine: (ha-097312-m03)     </interface>
	I0923 12:58:23.229160  682373 main.go:141] libmachine: (ha-097312-m03)     <interface type='network'>
	I0923 12:58:23.229172  682373 main.go:141] libmachine: (ha-097312-m03)       <source network='default'/>
	I0923 12:58:23.229186  682373 main.go:141] libmachine: (ha-097312-m03)       <model type='virtio'/>
	I0923 12:58:23.229197  682373 main.go:141] libmachine: (ha-097312-m03)     </interface>
	I0923 12:58:23.229203  682373 main.go:141] libmachine: (ha-097312-m03)     <serial type='pty'>
	I0923 12:58:23.229214  682373 main.go:141] libmachine: (ha-097312-m03)       <target port='0'/>
	I0923 12:58:23.229223  682373 main.go:141] libmachine: (ha-097312-m03)     </serial>
	I0923 12:58:23.229232  682373 main.go:141] libmachine: (ha-097312-m03)     <console type='pty'>
	I0923 12:58:23.229242  682373 main.go:141] libmachine: (ha-097312-m03)       <target type='serial' port='0'/>
	I0923 12:58:23.229252  682373 main.go:141] libmachine: (ha-097312-m03)     </console>
	I0923 12:58:23.229264  682373 main.go:141] libmachine: (ha-097312-m03)     <rng model='virtio'>
	I0923 12:58:23.229283  682373 main.go:141] libmachine: (ha-097312-m03)       <backend model='random'>/dev/random</backend>
	I0923 12:58:23.229301  682373 main.go:141] libmachine: (ha-097312-m03)     </rng>
	I0923 12:58:23.229309  682373 main.go:141] libmachine: (ha-097312-m03)     
	I0923 12:58:23.229315  682373 main.go:141] libmachine: (ha-097312-m03)     
	I0923 12:58:23.229321  682373 main.go:141] libmachine: (ha-097312-m03)   </devices>
	I0923 12:58:23.229324  682373 main.go:141] libmachine: (ha-097312-m03) </domain>
	I0923 12:58:23.229331  682373 main.go:141] libmachine: (ha-097312-m03) 
	I0923 12:58:23.236443  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:ba:f1:b5 in network default
	I0923 12:58:23.237006  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:23.237021  682373 main.go:141] libmachine: (ha-097312-m03) Ensuring networks are active...
	I0923 12:58:23.237857  682373 main.go:141] libmachine: (ha-097312-m03) Ensuring network default is active
	I0923 12:58:23.238229  682373 main.go:141] libmachine: (ha-097312-m03) Ensuring network mk-ha-097312 is active
	I0923 12:58:23.238611  682373 main.go:141] libmachine: (ha-097312-m03) Getting domain xml...
	I0923 12:58:23.239268  682373 main.go:141] libmachine: (ha-097312-m03) Creating domain...
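The block above prints the libvirt domain XML that the kvm2 driver defines for the new node and then creates the domain. A minimal sketch of defining and booting a domain with the libvirt Go bindings (libvirt.org/go/libvirt, which requires cgo and the libvirt development headers) is below; the XML is a stripped-down placeholder, not the full definition from the log, and the calls are illustrative rather than the driver's exact code path.

package main

import (
	"fmt"
	"log"

	"libvirt.org/go/libvirt"
)

// Placeholder domain definition; the real one includes disks, interfaces,
// serial console and rng devices as printed in the log above.
const domainXML = `<domain type='kvm'>
  <name>sketch-vm</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os><type>hvm</type></os>
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain
		log.Fatal(err)
	}
	name, _ := dom.GetName()
	fmt.Println("started domain", name)
}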
	I0923 12:58:24.490717  682373 main.go:141] libmachine: (ha-097312-m03) Waiting to get IP...
	I0923 12:58:24.491571  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:24.492070  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:24.492095  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:24.492045  683157 retry.go:31] will retry after 248.750792ms: waiting for machine to come up
	I0923 12:58:24.742884  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:24.743526  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:24.743556  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:24.743474  683157 retry.go:31] will retry after 255.093938ms: waiting for machine to come up
	I0923 12:58:24.999946  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:25.000409  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:25.000437  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:25.000354  683157 retry.go:31] will retry after 366.076555ms: waiting for machine to come up
	I0923 12:58:25.367854  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:25.368400  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:25.368423  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:25.368345  683157 retry.go:31] will retry after 602.474157ms: waiting for machine to come up
	I0923 12:58:25.972258  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:25.972737  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:25.972759  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:25.972695  683157 retry.go:31] will retry after 694.585684ms: waiting for machine to come up
	I0923 12:58:26.668534  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:26.668902  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:26.668929  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:26.668869  683157 retry.go:31] will retry after 679.770142ms: waiting for machine to come up
	I0923 12:58:27.350837  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:27.351322  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:27.351348  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:27.351244  683157 retry.go:31] will retry after 724.740855ms: waiting for machine to come up
	I0923 12:58:28.077164  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:28.077637  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:28.077666  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:28.077575  683157 retry.go:31] will retry after 928.712628ms: waiting for machine to come up
	I0923 12:58:29.008154  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:29.008550  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:29.008579  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:29.008504  683157 retry.go:31] will retry after 1.450407892s: waiting for machine to come up
	I0923 12:58:30.461271  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:30.461634  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:30.461657  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:30.461609  683157 retry.go:31] will retry after 1.972612983s: waiting for machine to come up
	I0923 12:58:32.435439  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:32.435994  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:32.436026  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:32.435936  683157 retry.go:31] will retry after 2.428412852s: waiting for machine to come up
	I0923 12:58:34.866973  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:34.867442  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:34.867469  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:34.867396  683157 retry.go:31] will retry after 3.321760424s: waiting for machine to come up
	I0923 12:58:38.190761  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:38.191232  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:38.191259  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:38.191169  683157 retry.go:31] will retry after 3.240294118s: waiting for machine to come up
	I0923 12:58:41.435372  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:41.435812  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:41.435833  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:41.435772  683157 retry.go:31] will retry after 4.450333931s: waiting for machine to come up
	I0923 12:58:45.888567  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:45.889089  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has current primary IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:45.889129  682373 main.go:141] libmachine: (ha-097312-m03) Found IP for machine: 192.168.39.174
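The "will retry after ..." sequence above is a jittered, growing backoff while the new VM waits for a DHCP lease. A generic sketch of that retry shape is below; the durations and the fake "no IP yet" condition are illustrative.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts run out, sleeping a jittered,
// growing delay between tries, the same shape as the retry lines in the log.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	start := time.Now()
	err := retry(5, 250*time.Millisecond, func() error {
		if time.Since(start) < time.Second {
			return errors.New("machine has no IP yet")
		}
		return nil
	})
	fmt.Println("done:", err)
}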
	I0923 12:58:45.889152  682373 main.go:141] libmachine: (ha-097312-m03) Reserving static IP address...
	I0923 12:58:45.889591  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find host DHCP lease matching {name: "ha-097312-m03", mac: "52:54:00:39:fc:65", ip: "192.168.39.174"} in network mk-ha-097312
	I0923 12:58:45.977147  682373 main.go:141] libmachine: (ha-097312-m03) Reserved static IP address: 192.168.39.174
	I0923 12:58:45.977177  682373 main.go:141] libmachine: (ha-097312-m03) Waiting for SSH to be available...
	I0923 12:58:45.977199  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Getting to WaitForSSH function...
	I0923 12:58:45.980053  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:45.980585  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312
	I0923 12:58:45.980626  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find defined IP address of network mk-ha-097312 interface with MAC address 52:54:00:39:fc:65
	I0923 12:58:45.980767  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH client type: external
	I0923 12:58:45.980803  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa (-rw-------)
	I0923 12:58:45.980837  682373 main.go:141] libmachine: (ha-097312-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:58:45.980856  682373 main.go:141] libmachine: (ha-097312-m03) DBG | About to run SSH command:
	I0923 12:58:45.980901  682373 main.go:141] libmachine: (ha-097312-m03) DBG | exit 0
	I0923 12:58:45.984924  682373 main.go:141] libmachine: (ha-097312-m03) DBG | SSH cmd err, output: exit status 255: 
	I0923 12:58:45.984953  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0923 12:58:45.984969  682373 main.go:141] libmachine: (ha-097312-m03) DBG | command : exit 0
	I0923 12:58:45.984980  682373 main.go:141] libmachine: (ha-097312-m03) DBG | err     : exit status 255
	I0923 12:58:45.984992  682373 main.go:141] libmachine: (ha-097312-m03) DBG | output  : 
	I0923 12:58:48.985305  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Getting to WaitForSSH function...
	I0923 12:58:48.988493  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:48.989086  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:48.989132  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:48.989359  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH client type: external
	I0923 12:58:48.989374  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa (-rw-------)
	I0923 12:58:48.989402  682373 main.go:141] libmachine: (ha-097312-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:58:48.989422  682373 main.go:141] libmachine: (ha-097312-m03) DBG | About to run SSH command:
	I0923 12:58:48.989477  682373 main.go:141] libmachine: (ha-097312-m03) DBG | exit 0
	I0923 12:58:49.118512  682373 main.go:141] libmachine: (ha-097312-m03) DBG | SSH cmd err, output: <nil>: 
	I0923 12:58:49.118822  682373 main.go:141] libmachine: (ha-097312-m03) KVM machine creation complete!
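WaitForSSH above shells out to the system ssh binary and simply runs "exit 0" until it succeeds (the first attempt fails with exit status 255 because the host is not reachable yet). A sketch of that loop with os/exec follows; the host, key path, option list and retry budget are placeholders.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running "ssh ... exit 0" against the new VM until the
// command exits cleanly, mirroring the WaitForSSH loop in the log above.
func waitForSSH(host, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	for i := 0; i < 10; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // the log retries roughly every 3s
	}
	return fmt.Errorf("ssh to %s never became available", host)
}

func main() {
	fmt.Println(waitForSSH("192.168.39.174", "/path/to/id_rsa"))
}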
	I0923 12:58:49.119172  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetConfigRaw
	I0923 12:58:49.119782  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:49.119996  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:49.120225  682373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:58:49.120260  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetState
	I0923 12:58:49.121499  682373 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:58:49.121514  682373 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:58:49.121519  682373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:58:49.121524  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.124296  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.124870  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.124900  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.125084  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.125266  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.125423  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.125561  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.125760  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.126112  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.126128  682373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:58:49.237975  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:58:49.238009  682373 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:58:49.238020  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.241019  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.241453  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.241483  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.241651  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.241948  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.242157  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.242344  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.242559  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.242800  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.242816  682373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:58:49.358902  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:58:49.358998  682373 main.go:141] libmachine: found compatible host: buildroot
	I0923 12:58:49.359008  682373 main.go:141] libmachine: Provisioning with buildroot...
	I0923 12:58:49.359016  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:49.359321  682373 buildroot.go:166] provisioning hostname "ha-097312-m03"
	I0923 12:58:49.359351  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:49.359578  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.362575  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.363012  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.363043  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.363307  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.363499  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.363671  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.363837  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.363993  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.364183  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.364200  682373 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-097312-m03 && echo "ha-097312-m03" | sudo tee /etc/hostname
	I0923 12:58:49.489492  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312-m03
	
	I0923 12:58:49.489526  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.492826  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.493233  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.493269  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.493628  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.493912  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.494119  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.494303  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.494519  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.494751  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.494771  682373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-097312-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-097312-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-097312-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:58:49.623370  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:58:49.623402  682373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:58:49.623425  682373 buildroot.go:174] setting up certificates
	I0923 12:58:49.623436  682373 provision.go:84] configureAuth start
	I0923 12:58:49.623450  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:49.623804  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:49.626789  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.627251  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.627282  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.627473  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.630844  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.631265  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.631296  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.631526  682373 provision.go:143] copyHostCerts
	I0923 12:58:49.631561  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:58:49.631598  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 12:58:49.631607  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:58:49.631691  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 12:58:49.631792  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:58:49.631821  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 12:58:49.631827  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:58:49.631868  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:58:49.631937  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:58:49.631962  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 12:58:49.631969  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:58:49.632010  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:58:49.632096  682373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.ha-097312-m03 san=[127.0.0.1 192.168.39.174 ha-097312-m03 localhost minikube]
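configureAuth above generates a server certificate signed by the minikube CA with the SANs listed in the log (127.0.0.1, 192.168.39.174, ha-097312-m03, localhost, minikube). A self-contained crypto/x509 sketch of that shape follows; it creates a throwaway CA instead of loading ca.pem/ca-key.pem, error handling is elided, and validity periods are arbitrary.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow reuses the existing CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the same SANs the log lists for ha-097312-m03.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-097312-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-097312-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.174")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}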
	I0923 12:58:49.828110  682373 provision.go:177] copyRemoteCerts
	I0923 12:58:49.828198  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:58:49.828227  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.830911  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.831302  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.831336  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.831594  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.831831  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.832077  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.832238  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:49.921694  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:58:49.921777  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:58:49.946275  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:58:49.946377  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:58:49.972209  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:58:49.972329  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 12:58:49.998142  682373 provision.go:87] duration metric: took 374.691465ms to configureAuth
	I0923 12:58:49.998176  682373 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:58:49.998394  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:49.998468  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.001457  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.001907  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.002003  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.002101  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.002332  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.002519  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.002830  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.003058  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:50.003274  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:50.003290  682373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:58:50.239197  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
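[editor's note] The step above is minikube writing a CRIO_MINIKUBE_OPTIONS drop-in to /etc/sysconfig/crio.minikube on the new node and restarting CRI-O so the service CIDR is treated as an insecure registry. Below is a rough, illustrative Go sketch of issuing the same command over SSH with golang.org/x/crypto/ssh; the key path, user and address are taken from the log lines above, while the error handling and everything else are assumptions, not minikube's actual ssh_runner code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address come from the sshutil line above; adjust as needed.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.174:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same drop-in the log shows being written before `systemctl restart crio`.
	cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
	out, err := session.CombinedOutput(cmd)
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}
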
	I0923 12:58:50.239229  682373 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:58:50.239238  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetURL
	I0923 12:58:50.240570  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using libvirt version 6000000
	I0923 12:58:50.243373  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.243723  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.243750  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.243998  682373 main.go:141] libmachine: Docker is up and running!
	I0923 12:58:50.244012  682373 main.go:141] libmachine: Reticulating splines...
	I0923 12:58:50.244021  682373 client.go:171] duration metric: took 27.457166675s to LocalClient.Create
	I0923 12:58:50.244048  682373 start.go:167] duration metric: took 27.457253634s to libmachine.API.Create "ha-097312"
	I0923 12:58:50.244058  682373 start.go:293] postStartSetup for "ha-097312-m03" (driver="kvm2")
	I0923 12:58:50.244067  682373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:58:50.244084  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.244341  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:58:50.244373  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.247177  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.247500  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.247521  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.247754  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.247951  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.248097  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.248197  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:50.333384  682373 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:58:50.338046  682373 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:58:50.338080  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:58:50.338170  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:58:50.338267  682373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 12:58:50.338282  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 12:58:50.338392  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:58:50.348354  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:58:50.372707  682373 start.go:296] duration metric: took 128.633991ms for postStartSetup
	I0923 12:58:50.372762  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetConfigRaw
	I0923 12:58:50.373426  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:50.376697  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.377173  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.377211  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.377593  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:58:50.377873  682373 start.go:128] duration metric: took 27.609858816s to createHost
	I0923 12:58:50.377907  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.380411  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.380907  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.380940  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.381160  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.381382  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.381590  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.381776  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.381976  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:50.382153  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:50.382163  682373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:58:50.503140  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727096330.482204055
	
	I0923 12:58:50.503171  682373 fix.go:216] guest clock: 1727096330.482204055
	I0923 12:58:50.503182  682373 fix.go:229] Guest: 2024-09-23 12:58:50.482204055 +0000 UTC Remote: 2024-09-23 12:58:50.377890431 +0000 UTC m=+148.586385508 (delta=104.313624ms)
	I0923 12:58:50.503201  682373 fix.go:200] guest clock delta is within tolerance: 104.313624ms
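[editor's note] The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it to the local wall clock, and accept the host once the skew is inside a tolerance (here a 104ms delta). A minimal Go sketch of that comparison; the parsed sample is the value from the log, and the 2-second tolerance is an assumption for illustration, not minikube's actual constant.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// checkClockSkew parses the guest's `date +%s.%N` output and reports whether it
// is within the given tolerance of the local clock.
func checkClockSkew(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	// Sample value captured from the log above; the 2s tolerance is assumed.
	delta, ok, err := checkClockSkew("1727096330.482204055", 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}
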
	I0923 12:58:50.503207  682373 start.go:83] releasing machines lock for "ha-097312-m03", held for 27.735369252s
	I0923 12:58:50.503226  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.503498  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:50.506212  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.506688  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.506716  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.509222  682373 out.go:177] * Found network options:
	I0923 12:58:50.511101  682373 out.go:177]   - NO_PROXY=192.168.39.160,192.168.39.214
	W0923 12:58:50.512787  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:58:50.512820  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:58:50.512843  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.513731  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.513996  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.514102  682373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:58:50.514157  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	W0923 12:58:50.514279  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:58:50.514318  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:58:50.514393  682373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:58:50.514415  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.517470  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.517502  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.517875  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.517907  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.517943  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.517962  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.518097  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.518178  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.518290  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.518373  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.518440  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.518566  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:50.518640  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.518802  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:50.765065  682373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:58:50.770910  682373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:58:50.770996  682373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:58:50.788872  682373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:58:50.788920  682373 start.go:495] detecting cgroup driver to use...
	I0923 12:58:50.790888  682373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:58:50.809431  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:58:50.825038  682373 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:58:50.825112  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:58:50.839523  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:58:50.854328  682373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:58:50.973330  682373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:58:51.114738  682373 docker.go:233] disabling docker service ...
	I0923 12:58:51.114816  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:58:51.129713  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:58:51.142863  682373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:58:51.295068  682373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:58:51.429699  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:58:51.445916  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:58:51.465380  682373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:58:51.465444  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.476939  682373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:58:51.477023  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.489669  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.501133  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.512757  682373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:58:51.524127  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.535054  682373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.553239  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
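[editor's note] The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroupfs is set as the cgroup manager, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. As a rough picture of the end state, here is a Go snippet that writes an equivalent drop-in directly; the real file carries more settings and its exact layout differs, so treat this as an approximation, not the file minikube produces.

package main

import "os"

// An approximation of what the CRI-O drop-in contains after the sed edits above.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// Requires root; path matches the file edited in the log above.
	if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0644); err != nil {
		panic(err)
	}
}
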
	I0923 12:58:51.565038  682373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:58:51.575598  682373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:58:51.575670  682373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:58:51.590718  682373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
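[editor's note] Above, the sysctl probe for net.bridge.bridge-nf-call-iptables fails because br_netfilter is not loaded, so minikube falls back to modprobe and then enables IPv4 forwarding before restarting CRI-O. A small illustrative Go sketch of that probe-and-fallback run locally; it needs root and is not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl node is missing, br_netfilter is not loaded yet.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}
	// Mirror `echo 1 > /proc/sys/net/ipv4/ip_forward` (requires root).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Printf("enabling ip_forward: %v\n", err)
	}
}
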
	I0923 12:58:51.601615  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:58:51.733836  682373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 12:58:51.836194  682373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:58:51.836276  682373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 12:58:51.841212  682373 start.go:563] Will wait 60s for crictl version
	I0923 12:58:51.841301  682373 ssh_runner.go:195] Run: which crictl
	I0923 12:58:51.845296  682373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:58:51.885994  682373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 12:58:51.886074  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:58:51.916461  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:58:51.949216  682373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:58:51.950816  682373 out.go:177]   - env NO_PROXY=192.168.39.160
	I0923 12:58:51.952396  682373 out.go:177]   - env NO_PROXY=192.168.39.160,192.168.39.214
	I0923 12:58:51.953858  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:51.957017  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:51.957485  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:51.957528  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:51.957807  682373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:58:51.962319  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:58:51.975129  682373 mustload.go:65] Loading cluster: ha-097312
	I0923 12:58:51.975422  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:51.975727  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:51.975781  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:51.992675  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0923 12:58:51.993145  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:51.993728  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:51.993763  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:51.994191  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:51.994434  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:58:51.996127  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:58:51.996593  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:51.996642  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:52.013141  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39117
	I0923 12:58:52.013710  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:52.014272  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:52.014297  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:52.014717  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:52.014958  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:58:52.015174  682373 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312 for IP: 192.168.39.174
	I0923 12:58:52.015189  682373 certs.go:194] generating shared ca certs ...
	I0923 12:58:52.015209  682373 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:58:52.015353  682373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:58:52.015390  682373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:58:52.015406  682373 certs.go:256] generating profile certs ...
	I0923 12:58:52.015485  682373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key
	I0923 12:58:52.015512  682373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec
	I0923 12:58:52.015531  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160 192.168.39.214 192.168.39.174 192.168.39.254]
	I0923 12:58:52.141850  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec ...
	I0923 12:58:52.141895  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec: {Name:mkad80d48481e741ac2c369b88d81a886d1377dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:58:52.142113  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec ...
	I0923 12:58:52.142128  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec: {Name:mkc4802b23ce391f6bffaeddf1263168cc10992d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:58:52.142267  682373 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	I0923 12:58:52.142420  682373 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key
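[editor's note] certs.go above mints the profile's apiserver certificate with a SAN list covering the in-cluster service IP, loopback, the three control-plane node addresses and the HA VIP (192.168.39.254), signed by the shared minikube CA. A compact crypto/x509 sketch of issuing such a certificate; it creates a throwaway CA in memory instead of loading ca.key/ca.crt from the .minikube directory, and the key type, validity and subject are arbitrary choices, so it is illustrative only.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube reuses its persistent CA key pair instead.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) // key-gen errors elided for brevity
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SAN IPs listed in the log above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.160"), net.ParseIP("192.168.39.214"),
			net.ParseIP("192.168.39.174"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
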
	I0923 12:58:52.142572  682373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key
	I0923 12:58:52.142590  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:58:52.142609  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:58:52.142626  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:58:52.142641  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:58:52.142657  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:58:52.142672  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:58:52.142686  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:58:52.162055  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:58:52.162175  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 12:58:52.162222  682373 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 12:58:52.162262  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:58:52.162301  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:58:52.162335  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:58:52.162366  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:58:52.162425  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:58:52.162463  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.162486  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.162507  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.162554  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:58:52.165353  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:52.165846  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:58:52.165879  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:52.166095  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:58:52.166330  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:58:52.166495  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:58:52.166657  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:58:52.246349  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 12:58:52.251941  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 12:58:52.264760  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 12:58:52.269374  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0923 12:58:52.280997  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 12:58:52.286014  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 12:58:52.298212  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 12:58:52.302755  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 12:58:52.314763  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 12:58:52.319431  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 12:58:52.330709  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 12:58:52.335071  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1671 bytes)
	I0923 12:58:52.347748  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:58:52.374394  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:58:52.402200  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:58:52.428792  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:58:52.453080  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0923 12:58:52.477297  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:58:52.502367  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:58:52.527508  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:58:52.552924  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:58:52.577615  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 12:58:52.602992  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 12:58:52.628751  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 12:58:52.648794  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0923 12:58:52.665863  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 12:58:52.683590  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 12:58:52.703077  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 12:58:52.721135  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1671 bytes)
	I0923 12:58:52.738608  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 12:58:52.756580  682373 ssh_runner.go:195] Run: openssl version
	I0923 12:58:52.762277  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:58:52.773072  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.778133  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.778215  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.784053  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:58:52.795445  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 12:58:52.806223  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.811080  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.811155  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.817004  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 12:58:52.828392  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 12:58:52.839455  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.844434  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.844501  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.850419  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
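[editor's note] The three blocks above install minikubeCA.pem, 669447.pem and 6694472.pem under /usr/share/ca-certificates and link each into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0) so OpenSSL-based clients pick them up. A small Go sketch of that hash-and-link step, shelling out to openssl just as the log does; the path in main is only an example, and writing into /etc/ssl/certs needs root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into /etc/ssl/certs as <subject-hash>.0,
// mirroring the `openssl x509 -hash -noout` + `ln -fs` steps in the log above.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // behave like `ln -fs`: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	// Example path; the log links minikubeCA.pem, 669447.pem and 6694472.pem this way.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
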
	I0923 12:58:52.861972  682373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:58:52.866305  682373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:58:52.866361  682373 kubeadm.go:934] updating node {m03 192.168.39.174 8443 v1.31.1 crio true true} ...
	I0923 12:58:52.866458  682373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-097312-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:58:52.866484  682373 kube-vip.go:115] generating kube-vip config ...
	I0923 12:58:52.866520  682373 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:58:52.883666  682373 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:58:52.883745  682373 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0923 12:58:52.883809  682373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:58:52.895283  682373 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 12:58:52.895366  682373 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 12:58:52.905663  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 12:58:52.905685  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 12:58:52.905697  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 12:58:52.905721  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:58:52.905750  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:58:52.905775  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:58:52.905694  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:58:52.905887  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:58:52.923501  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:58:52.923608  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:58:52.923612  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 12:58:52.923649  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 12:58:52.923698  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 12:58:52.923733  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 12:58:52.956744  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 12:58:52.956812  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0923 12:58:54.045786  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 12:58:54.057369  682373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 12:58:54.076949  682373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:58:54.094827  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 12:58:54.111645  682373 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:58:54.115795  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
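[editor's note] The two commands above make the control-plane.minikube.internal mapping idempotent: grep checks for an existing entry, and when the rewrite runs, the old line is filtered out and the VIP is appended. A rough Go equivalent of that update; the matching is slightly looser than the tab-anchored grep in the log, and touching the real /etc/hosts requires root.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line that maps host and appends "ip<TAB>host",
// mirroring the grep -v / echo / cp pipeline shown in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == host {
			continue // stale mapping, equivalent to the grep -v filter
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values taken from the log above.
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
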
	I0923 12:58:54.129074  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:58:54.273605  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:58:54.295098  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:58:54.295704  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:54.295775  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:54.312297  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0923 12:58:54.312791  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:54.313333  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:54.313355  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:54.313727  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:54.314023  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:58:54.314202  682373 start.go:317] joinCluster: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:58:54.314373  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 12:58:54.314400  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:58:54.318048  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:54.318537  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:58:54.318569  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:54.318697  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:58:54.319009  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:58:54.319229  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:58:54.319353  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:58:54.524084  682373 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:58:54.524132  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ll3mfm.tdumzjzob0cezji3 --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m03 --control-plane --apiserver-advertise-address=192.168.39.174 --apiserver-bind-port=8443"
	I0923 12:59:17.735394  682373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ll3mfm.tdumzjzob0cezji3 --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m03 --control-plane --apiserver-advertise-address=192.168.39.174 --apiserver-bind-port=8443": (23.211225253s)
	I0923 12:59:17.735437  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 12:59:18.305608  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-097312-m03 minikube.k8s.io/updated_at=2024_09_23T12_59_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-097312 minikube.k8s.io/primary=false
	I0923 12:59:18.439539  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-097312-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 12:59:18.578555  682373 start.go:319] duration metric: took 24.264347271s to joinCluster
	I0923 12:59:18.578645  682373 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:59:18.578956  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:59:18.580466  682373 out.go:177] * Verifying Kubernetes components...
	I0923 12:59:18.581761  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:59:18.828388  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:59:18.856001  682373 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:59:18.856284  682373 kapi.go:59] client config for ha-097312: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key", CAFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 12:59:18.856351  682373 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.160:8443
	I0923 12:59:18.856639  682373 node_ready.go:35] waiting up to 6m0s for node "ha-097312-m03" to be "Ready" ...
	I0923 12:59:18.856738  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:18.856749  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:18.856757  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:18.856766  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:18.860204  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:19.357957  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:19.357992  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:19.358007  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:19.358015  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:19.361736  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:19.857898  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:19.857930  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:19.857938  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:19.857944  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:19.862012  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:20.356893  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:20.356921  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:20.356930  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:20.356934  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:20.363054  682373 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:59:20.857559  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:20.857592  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:20.857605  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:20.857610  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:20.861005  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:20.862362  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:21.357690  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:21.357715  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:21.357724  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:21.357728  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:21.361111  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:21.857622  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:21.857650  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:21.857662  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:21.857666  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:21.861308  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:22.357805  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:22.357838  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:22.357852  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:22.357857  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:22.362010  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:22.856839  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:22.856862  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:22.856870  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:22.856876  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:22.860508  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:23.356920  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:23.356945  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:23.356954  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:23.356958  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:23.361117  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:23.361903  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:23.857041  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:23.857068  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:23.857080  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:23.857085  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:23.860533  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:24.357315  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:24.357339  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:24.357347  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:24.357351  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:24.361517  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:24.857855  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:24.857884  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:24.857895  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:24.857900  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:24.861499  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:25.357580  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:25.357619  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:25.357634  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:25.357642  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:25.361466  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:25.362062  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:25.856889  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:25.856972  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:25.856988  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:25.856995  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:25.864725  682373 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:59:26.357753  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:26.357775  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:26.357783  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:26.357788  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:26.361700  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:26.857569  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:26.857596  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:26.857606  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:26.857610  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:26.861224  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:27.357961  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:27.357993  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:27.358004  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:27.358010  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:27.361578  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:27.362220  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:27.857445  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:27.857476  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:27.857488  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:27.857492  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:27.860961  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:28.356947  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:28.356973  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:28.356982  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:28.356986  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:28.360616  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:28.857670  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:28.857696  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:28.857705  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:28.857709  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:28.861424  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:29.357678  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:29.357701  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:29.357710  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:29.357715  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:29.361197  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:29.857149  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:29.857176  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:29.857184  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:29.857190  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:29.861121  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:29.862064  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:30.357260  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:30.357288  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:30.357300  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:30.357308  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:30.360825  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:30.857554  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:30.857588  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:30.857601  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:30.857607  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:30.862056  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:31.357693  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:31.357719  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:31.357729  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:31.357745  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:31.361364  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:31.857735  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:31.857763  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:31.857772  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:31.857777  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:31.861563  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:31.862191  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:32.357163  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:32.357191  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:32.357201  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:32.357207  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:32.360747  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:32.857730  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:32.857757  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:32.857766  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:32.857770  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:32.861363  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:33.357472  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:33.357507  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:33.357516  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:33.357521  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:33.361140  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:33.857033  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:33.857060  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:33.857069  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:33.857073  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:33.860438  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:34.357801  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:34.357841  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:34.357852  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:34.357857  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:34.361712  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:34.362366  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:34.857887  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:34.857914  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:34.857924  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:34.857929  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:34.861889  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:35.357641  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:35.357673  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:35.357745  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:35.357754  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:35.362328  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:35.856847  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:35.856871  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:35.856879  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:35.856884  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:35.860452  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.357570  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:36.357596  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.357604  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.357608  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.360898  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.361411  682373 node_ready.go:49] node "ha-097312-m03" has status "Ready":"True"
	I0923 12:59:36.361434  682373 node_ready.go:38] duration metric: took 17.504775714s for node "ha-097312-m03" to be "Ready" ...
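
The block of repeated GETs above is the node-readiness poll: the node object is re-fetched roughly every 500ms until its Ready condition reports "True". A hedged client-go sketch of the same loop (not minikube's node_ready implementation; the kubeconfig path, node name, and interval are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady re-fetches the node until its Ready condition is "True"
// or the timeout expires, mirroring the polling cadence visible in the log.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // ~500ms between GETs, as in the log
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}

func main() {
	// Placeholder kubeconfig path and node name for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForNodeReady(context.Background(), cs, "ha-097312-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
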
	I0923 12:59:36.361446  682373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:59:36.361531  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:36.361549  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.361557  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.361564  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.367567  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:36.374612  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.374726  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6g9x2
	I0923 12:59:36.374738  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.374750  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.374756  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.377869  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.378692  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:36.378712  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.378724  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.378729  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.381742  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.382472  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.382491  682373 pod_ready.go:82] duration metric: took 7.850172ms for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.382500  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.382562  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-txcxz
	I0923 12:59:36.382569  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.382577  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.382582  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.385403  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.386115  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:36.386131  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.386138  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.386142  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.388676  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.389107  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.389124  682373 pod_ready.go:82] duration metric: took 6.617983ms for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.389133  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.389188  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312
	I0923 12:59:36.389195  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.389202  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.389208  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.391701  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.392175  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:36.392190  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.392198  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.392201  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.394837  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.395206  682373 pod_ready.go:93] pod "etcd-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.395227  682373 pod_ready.go:82] duration metric: took 6.08706ms for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.395247  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.395320  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m02
	I0923 12:59:36.395330  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.395337  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.395340  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.398083  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.398586  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:36.398601  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.398608  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.398611  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.401154  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.401531  682373 pod_ready.go:93] pod "etcd-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.401548  682373 pod_ready.go:82] duration metric: took 6.293178ms for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.401558  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.557912  682373 request.go:632] Waited for 156.279648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m03
	I0923 12:59:36.558018  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m03
	I0923 12:59:36.558029  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.558039  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.558047  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.561558  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.757644  682373 request.go:632] Waited for 194.999965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:36.757715  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:36.757723  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.757735  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.757740  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.761054  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.761940  682373 pod_ready.go:93] pod "etcd-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.761961  682373 pod_ready.go:82] duration metric: took 360.394832ms for pod "etcd-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.761980  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.958288  682373 request.go:632] Waited for 196.158494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:59:36.958372  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:59:36.958380  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.958392  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.958398  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.962196  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:37.157878  682373 request.go:632] Waited for 194.88858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:37.157969  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:37.157982  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.157994  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.158002  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.161325  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:37.162218  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:37.162262  682373 pod_ready.go:82] duration metric: took 400.255775ms for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.162271  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.358381  682373 request.go:632] Waited for 196.017645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:59:37.358481  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:59:37.358490  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.358512  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.358538  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.362068  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:37.558164  682373 request.go:632] Waited for 195.3848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:37.558235  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:37.558245  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.558256  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.558264  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.563780  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:37.564272  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:37.564295  682373 pod_ready.go:82] duration metric: took 402.016943ms for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.564305  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.757786  682373 request.go:632] Waited for 193.39104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m03
	I0923 12:59:37.757874  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m03
	I0923 12:59:37.757881  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.757890  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.757897  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.762281  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:37.958642  682373 request.go:632] Waited for 195.351711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:37.958724  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:37.958731  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.958741  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.958751  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.963464  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:37.964071  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:37.964093  682373 pod_ready.go:82] duration metric: took 399.781684ms for pod "kube-apiserver-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.964104  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.158303  682373 request.go:632] Waited for 194.104315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:59:38.158371  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:59:38.158377  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.158385  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.158391  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.161516  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:38.358608  682373 request.go:632] Waited for 196.37901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:38.358678  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:38.358683  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.358693  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.358707  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.362309  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:38.362758  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:38.362779  682373 pod_ready.go:82] duration metric: took 398.667788ms for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.362790  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.557916  682373 request.go:632] Waited for 195.037752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:59:38.558039  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:59:38.558049  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.558057  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.558064  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.563352  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:38.758557  682373 request.go:632] Waited for 194.402691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:38.758625  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:38.758630  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.758637  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.758647  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.763501  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:38.764092  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:38.764116  682373 pod_ready.go:82] duration metric: took 401.316143ms for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.764127  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.958205  682373 request.go:632] Waited for 193.95149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m03
	I0923 12:59:38.958318  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m03
	I0923 12:59:38.958330  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.958341  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.958349  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.962605  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:39.158615  682373 request.go:632] Waited for 195.29247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.158699  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.158709  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.158718  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.158721  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.162027  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.162535  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:39.162561  682373 pod_ready.go:82] duration metric: took 398.425721ms for pod "kube-controller-manager-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.162572  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.358164  682373 request.go:632] Waited for 195.510394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:59:39.358250  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:59:39.358257  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.358268  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.358277  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.361850  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.558199  682373 request.go:632] Waited for 195.364547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:39.558282  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:39.558297  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.558307  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.558313  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.561590  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.562130  682373 pod_ready.go:93] pod "kube-proxy-drj8m" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:39.562153  682373 pod_ready.go:82] duration metric: took 399.573676ms for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.562166  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vs524" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.758184  682373 request.go:632] Waited for 195.937914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vs524
	I0923 12:59:39.758247  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vs524
	I0923 12:59:39.758252  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.758259  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.758265  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.761790  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.957921  682373 request.go:632] Waited for 195.366189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.957991  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.958005  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.958013  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.958019  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.962060  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:39.962614  682373 pod_ready.go:93] pod "kube-proxy-vs524" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:39.962646  682373 pod_ready.go:82] duration metric: took 400.470478ms for pod "kube-proxy-vs524" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.962661  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.158575  682373 request.go:632] Waited for 195.810945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:59:40.158664  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:59:40.158676  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.158687  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.158696  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.161968  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.358036  682373 request.go:632] Waited for 195.378024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:40.358107  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:40.358112  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.358120  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.358124  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.361928  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.362451  682373 pod_ready.go:93] pod "kube-proxy-z6ss5" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:40.362474  682373 pod_ready.go:82] duration metric: took 399.805025ms for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.362484  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.558528  682373 request.go:632] Waited for 195.950146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:59:40.558598  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:59:40.558612  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.558621  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.558625  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.562266  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.758487  682373 request.go:632] Waited for 195.542399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:40.758572  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:40.758580  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.758591  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.758597  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.761825  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.762402  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:40.762425  682373 pod_ready.go:82] duration metric: took 399.935026ms for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.762434  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.958691  682373 request.go:632] Waited for 196.142693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:59:40.958767  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:59:40.958774  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.958782  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.958789  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.962833  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:41.157936  682373 request.go:632] Waited for 194.384412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:41.158022  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:41.158027  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.158035  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.158040  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.161682  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:41.162279  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:41.162303  682373 pod_ready.go:82] duration metric: took 399.860916ms for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:41.162316  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:41.358427  682373 request.go:632] Waited for 196.013005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m03
	I0923 12:59:41.358521  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m03
	I0923 12:59:41.358530  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.358541  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.358548  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.362666  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:41.557722  682373 request.go:632] Waited for 194.306447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:41.557785  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:41.557790  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.557799  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.557805  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.561165  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:41.561618  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:41.561638  682373 pod_ready.go:82] duration metric: took 399.3114ms for pod "kube-scheduler-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:41.561649  682373 pod_ready.go:39] duration metric: took 5.200192468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
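
The per-pod waits above all follow the same shape: fetch the pod, inspect its PodReady condition, and confirm the hosting node. A compact sketch of the condition check over the kube-system namespace (illustrative; minikube's pod_ready helper also cross-checks the node object, and the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is "True".
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-45s ready=%v\n", p.Name, podReady(&p))
	}
}
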
	I0923 12:59:41.561668  682373 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:59:41.561726  682373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:59:41.578487  682373 api_server.go:72] duration metric: took 22.999797093s to wait for apiserver process to appear ...
	I0923 12:59:41.578520  682373 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:59:41.578549  682373 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I0923 12:59:41.583195  682373 api_server.go:279] https://192.168.39.160:8443/healthz returned 200:
	ok
	I0923 12:59:41.583283  682373 round_trippers.go:463] GET https://192.168.39.160:8443/version
	I0923 12:59:41.583292  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.583300  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.583303  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.584184  682373 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0923 12:59:41.584348  682373 api_server.go:141] control plane version: v1.31.1
	I0923 12:59:41.584376  682373 api_server.go:131] duration metric: took 5.84872ms to wait for apiserver health ...
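
The healthz/version probe above is a plain HTTPS GET against the apiserver, authenticated with the profile's client certificate, where /healthz is expected to return the literal body "ok". A hedged net/http sketch of the same probe (certificate and CA paths are placeholders; minikube wires these from its profile directory):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Hypothetical certificate locations; substitute your own profile paths.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}

	resp, err := client.Get("https://192.168.39.160:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}
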
	I0923 12:59:41.584386  682373 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:59:41.757749  682373 request.go:632] Waited for 173.249304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:41.757819  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:41.757848  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.757861  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.757869  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.765026  682373 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:59:41.775103  682373 system_pods.go:59] 24 kube-system pods found
	I0923 12:59:41.775147  682373 system_pods.go:61] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:59:41.775153  682373 system_pods.go:61] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:59:41.775158  682373 system_pods.go:61] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:59:41.775162  682373 system_pods.go:61] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:59:41.775166  682373 system_pods.go:61] "etcd-ha-097312-m03" [47812605-2ed5-49dc-acae-7b8ff115b1c5] Running
	I0923 12:59:41.775171  682373 system_pods.go:61] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:59:41.775176  682373 system_pods.go:61] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:59:41.775181  682373 system_pods.go:61] "kindnet-lcrdg" [fc7c4594-c83a-4254-a163-8f66b34c53c0] Running
	I0923 12:59:41.775186  682373 system_pods.go:61] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:59:41.775191  682373 system_pods.go:61] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:59:41.775195  682373 system_pods.go:61] "kube-apiserver-ha-097312-m03" [cfc94901-d0f5-4a59-a8d2-8841462a3166] Running
	I0923 12:59:41.775203  682373 system_pods.go:61] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:59:41.775214  682373 system_pods.go:61] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:59:41.775219  682373 system_pods.go:61] "kube-controller-manager-ha-097312-m03" [70886840-6967-4d3c-a0b7-e6448711e0cc] Running
	I0923 12:59:41.775224  682373 system_pods.go:61] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:59:41.775249  682373 system_pods.go:61] "kube-proxy-vs524" [92738649-c52b-44d5-866b-8cda751a538c] Running
	I0923 12:59:41.775255  682373 system_pods.go:61] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:59:41.775258  682373 system_pods.go:61] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:59:41.775264  682373 system_pods.go:61] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:59:41.775268  682373 system_pods.go:61] "kube-scheduler-ha-097312-m03" [7811405d-6f57-440f-a9a2-178f2a094f61] Running
	I0923 12:59:41.775273  682373 system_pods.go:61] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:59:41.775276  682373 system_pods.go:61] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:59:41.775282  682373 system_pods.go:61] "kube-vip-ha-097312-m03" [1de093b7-e402-48af-ac83-09f59ffd213e] Running
	I0923 12:59:41.775287  682373 system_pods.go:61] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:59:41.775297  682373 system_pods.go:74] duration metric: took 190.903005ms to wait for pod list to return data ...
	I0923 12:59:41.775310  682373 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:59:41.957641  682373 request.go:632] Waited for 182.223415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:59:41.957725  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:59:41.957732  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.957741  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.957748  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.961638  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:41.961870  682373 default_sa.go:45] found service account: "default"
	I0923 12:59:41.961901  682373 default_sa.go:55] duration metric: took 186.579724ms for default service account to be created ...
	I0923 12:59:41.961914  682373 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:59:42.158106  682373 request.go:632] Waited for 196.090807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:42.158184  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:42.158191  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:42.158202  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:42.158209  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:42.163268  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:42.169516  682373 system_pods.go:86] 24 kube-system pods found
	I0923 12:59:42.169555  682373 system_pods.go:89] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:59:42.169562  682373 system_pods.go:89] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:59:42.169566  682373 system_pods.go:89] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:59:42.169570  682373 system_pods.go:89] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:59:42.169574  682373 system_pods.go:89] "etcd-ha-097312-m03" [47812605-2ed5-49dc-acae-7b8ff115b1c5] Running
	I0923 12:59:42.169578  682373 system_pods.go:89] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:59:42.169582  682373 system_pods.go:89] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:59:42.169587  682373 system_pods.go:89] "kindnet-lcrdg" [fc7c4594-c83a-4254-a163-8f66b34c53c0] Running
	I0923 12:59:42.169596  682373 system_pods.go:89] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:59:42.169603  682373 system_pods.go:89] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:59:42.169609  682373 system_pods.go:89] "kube-apiserver-ha-097312-m03" [cfc94901-d0f5-4a59-a8d2-8841462a3166] Running
	I0923 12:59:42.169617  682373 system_pods.go:89] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:59:42.169629  682373 system_pods.go:89] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:59:42.169636  682373 system_pods.go:89] "kube-controller-manager-ha-097312-m03" [70886840-6967-4d3c-a0b7-e6448711e0cc] Running
	I0923 12:59:42.169643  682373 system_pods.go:89] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:59:42.169653  682373 system_pods.go:89] "kube-proxy-vs524" [92738649-c52b-44d5-866b-8cda751a538c] Running
	I0923 12:59:42.169657  682373 system_pods.go:89] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:59:42.169661  682373 system_pods.go:89] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:59:42.169665  682373 system_pods.go:89] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:59:42.169669  682373 system_pods.go:89] "kube-scheduler-ha-097312-m03" [7811405d-6f57-440f-a9a2-178f2a094f61] Running
	I0923 12:59:42.169672  682373 system_pods.go:89] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:59:42.169679  682373 system_pods.go:89] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:59:42.169684  682373 system_pods.go:89] "kube-vip-ha-097312-m03" [1de093b7-e402-48af-ac83-09f59ffd213e] Running
	I0923 12:59:42.169687  682373 system_pods.go:89] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:59:42.169694  682373 system_pods.go:126] duration metric: took 207.772669ms to wait for k8s-apps to be running ...
	I0923 12:59:42.169708  682373 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:59:42.169771  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:59:42.186008  682373 system_svc.go:56] duration metric: took 16.290747ms WaitForService to wait for kubelet
	I0923 12:59:42.186050  682373 kubeadm.go:582] duration metric: took 23.607368403s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:59:42.186083  682373 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:59:42.358541  682373 request.go:632] Waited for 172.350275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes
	I0923 12:59:42.358620  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes
	I0923 12:59:42.358625  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:42.358634  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:42.358638  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:42.361922  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:42.362876  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:59:42.362900  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:59:42.362911  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:59:42.362914  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:59:42.362918  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:59:42.362921  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:59:42.362925  682373 node_conditions.go:105] duration metric: took 176.836519ms to run NodePressure ...
	I0923 12:59:42.362937  682373 start.go:241] waiting for startup goroutines ...
	I0923 12:59:42.362958  682373 start.go:255] writing updated cluster config ...
	I0923 12:59:42.363261  682373 ssh_runner.go:195] Run: rm -f paused
	I0923 12:59:42.417533  682373 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 12:59:42.419577  682373 out.go:177] * Done! kubectl is now configured to use "ha-097312" cluster and "default" namespace by default
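	Editor's note: the start log above waits, in order, for the default service account, the kube-system pods, the kubelet service, and the node conditions before declaring the cluster ready. A rough way to re-check those same conditions by hand is sketched below; it is only a sketch and assumes the "ha-097312" kubectl context and minikube profile named in the log line above.
	
		kubectl --context ha-097312 -n kube-system get pods                # k8s-apps running
		kubectl --context ha-097312 -n default get serviceaccount default  # default SA created
		kubectl --context ha-097312 get nodes -o wide                      # node capacity / conditions
		minikube -p ha-097312 ssh -- sudo systemctl is-active kubelet      # kubelet service active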
	
	
	==> CRI-O <==
	Sep 23 13:03:26 ha-097312 crio[666]: time="2024-09-23 13:03:26.966415025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096606966394550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce6bf1be-d2d2-45be-a906-404f9b213851 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:26 ha-097312 crio[666]: time="2024-09-23 13:03:26.967046740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2f844e5-4f59-4e50-9fa5-94661a7c0349 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:26 ha-097312 crio[666]: time="2024-09-23 13:03:26.967101105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2f844e5-4f59-4e50-9fa5-94661a7c0349 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:26 ha-097312 crio[666]: time="2024-09-23 13:03:26.967337060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2f844e5-4f59-4e50-9fa5-94661a7c0349 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.005877606Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9921c33f-fde7-4877-afb3-0115b43691b5 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.005953662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9921c33f-fde7-4877-afb3-0115b43691b5 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.011749756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2edae552-82ab-462d-b7ce-af0faa4c1e7f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.012181545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096607012148653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2edae552-82ab-462d-b7ce-af0faa4c1e7f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.012882567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c615986-230a-4acf-8da7-2c68d3ca36d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.012938806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c615986-230a-4acf-8da7-2c68d3ca36d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.013167357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c615986-230a-4acf-8da7-2c68d3ca36d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.051161426Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56eb9153-b3be-487b-965b-05a35a732dd9 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.051254060Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56eb9153-b3be-487b-965b-05a35a732dd9 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.052539028Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24c3730c-3843-4668-adcb-c273735459c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.053215687Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096607053191522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24c3730c-3843-4668-adcb-c273735459c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.053967900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da4a031c-d69e-4697-a8ab-c146fd0f94d1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.054038463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da4a031c-d69e-4697-a8ab-c146fd0f94d1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.054285695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da4a031c-d69e-4697-a8ab-c146fd0f94d1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.092347997Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef0b18b4-f9c0-49b0-bbc9-8fe5a5e996d4 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.092434542Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef0b18b4-f9c0-49b0-bbc9-8fe5a5e996d4 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.093465571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37e8990c-5054-4f1b-9773-da41e965555d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.093972319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096607093945704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37e8990c-5054-4f1b-9773-da41e965555d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.094685289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49c6b572-d62d-4cd2-82f4-9e0e337a6d57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.094743366Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49c6b572-d62d-4cd2-82f4-9e0e337a6d57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:27 ha-097312 crio[666]: time="2024-09-23 13:03:27.095014619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49c6b572-d62d-4cd2-82f4-9e0e337a6d57 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0c8b3d3e1c960       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   01a99cef826dd       busybox-7dff88458-4rksx
	6494b72ca963e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   09f40d2b50613       coredns-7c65d6cfc9-txcxz
	070d45bce8ff9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   287ae69fbba66       storage-provisioner
	cead05960724e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   d6346e81a93e3       coredns-7c65d6cfc9-6g9x2
	03670fd92c8a8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   fa074de98ab0b       kindnet-j8l5t
	37b6ad938698e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   8efd7c52e41eb       kube-proxy-drj8m
	e5095373416a8       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   a65df228e8bfd       kube-vip-ha-097312
	9bfbdbe2c35f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   46a49b5018b58       etcd-ha-097312
	5c9e8fb5e944b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   e4cdc1cb583f4       kube-scheduler-ha-097312
	1c28bf3f4d80d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   66109e91b1f78       kube-apiserver-ha-097312
	476ad705f8968       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   d5fd7dbc75ab3       kube-controller-manager-ha-097312
	
	
	==> coredns [6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828] <==
	[INFO] 10.244.1.2:45817 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000653057s
	[INFO] 10.244.1.2:52272 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.003009815s
	[INFO] 10.244.0.4:33030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115409s
	[INFO] 10.244.0.4:45577 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003386554s
	[INFO] 10.244.0.4:34507 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148722s
	[INFO] 10.244.0.4:56395 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159124s
	[INFO] 10.244.2.2:48128 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168767s
	[INFO] 10.244.2.2:38686 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001366329s
	[INFO] 10.244.2.2:54280 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098386s
	[INFO] 10.244.2.2:36178 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083893s
	[INFO] 10.244.1.2:36479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151724s
	[INFO] 10.244.1.2:52581 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000183399s
	[INFO] 10.244.1.2:36358 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015472s
	[INFO] 10.244.0.4:37418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198313s
	[INFO] 10.244.2.2:52660 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011216s
	[INFO] 10.244.1.2:33460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123493s
	[INFO] 10.244.1.2:42619 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000187646s
	[INFO] 10.244.0.4:50282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110854s
	[INFO] 10.244.0.4:48865 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169177s
	[INFO] 10.244.0.4:52671 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110814s
	[INFO] 10.244.2.2:49013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236486s
	[INFO] 10.244.2.2:37600 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000236051s
	[INFO] 10.244.2.2:54687 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137539s
	[INFO] 10.244.1.2:37754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237319s
	[INFO] 10.244.1.2:50571 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167449s
	
	
	==> coredns [cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab] <==
	[INFO] 10.244.0.4:37338 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004244948s
	[INFO] 10.244.0.4:45643 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226629s
	[INFO] 10.244.0.4:55589 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138142s
	[INFO] 10.244.0.4:39714 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089285s
	[INFO] 10.244.2.2:36050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198766s
	[INFO] 10.244.2.2:57929 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002002291s
	[INFO] 10.244.2.2:39920 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241567s
	[INFO] 10.244.2.2:40496 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084082s
	[INFO] 10.244.1.2:53956 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001953841s
	[INFO] 10.244.1.2:39693 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161735s
	[INFO] 10.244.1.2:59255 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001392042s
	[INFO] 10.244.1.2:33162 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137674s
	[INFO] 10.244.1.2:56819 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135224s
	[INFO] 10.244.0.4:58065 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142108s
	[INFO] 10.244.0.4:49950 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114547s
	[INFO] 10.244.0.4:48467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051186s
	[INFO] 10.244.2.2:57485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120774s
	[INFO] 10.244.2.2:47368 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105596s
	[INFO] 10.244.2.2:52953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077623s
	[INFO] 10.244.1.2:45470 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011128s
	[INFO] 10.244.1.2:35601 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157053s
	[INFO] 10.244.0.4:60925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000610878s
	[INFO] 10.244.2.2:48335 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176802s
	[INFO] 10.244.1.2:39758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190843s
	[INFO] 10.244.1.2:35713 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110523s
	
	
	==> describe nodes <==
	Name:               ha-097312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_57_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:57:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:03:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    ha-097312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fef43eb48e8a42b5815ed7c921d42333
	  System UUID:                fef43eb4-8e8a-42b5-815e-d7c921d42333
	  Boot ID:                    22749ef5-5a8a-4d9f-b42e-96dd2d4e32eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4rksx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-7c65d6cfc9-6g9x2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 coredns-7c65d6cfc9-txcxz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 etcd-ha-097312                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-j8l5t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-097312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-controller-manager-ha-097312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-proxy-drj8m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-097312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-vip-ha-097312                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m18s  kube-proxy       
	  Normal  Starting                 6m25s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m25s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m25s  kubelet          Node ha-097312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s  kubelet          Node ha-097312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s  kubelet          Node ha-097312 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m22s  node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal  NodeReady                6m8s   kubelet          Node ha-097312 status is now: NodeReady
	  Normal  RegisteredNode           5m22s  node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal  RegisteredNode           4m4s   node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	
	
	Name:               ha-097312-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_57_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:57:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:01:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    ha-097312-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 226ea4f6db5b44f7bdab73033cb7ae33
	  System UUID:                226ea4f6-db5b-44f7-bdab-73033cb7ae33
	  Boot ID:                    8cb64dab-25d7-4dcd-9c08-1dcc2d214767
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wz97n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-097312-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m28s
	  kube-system                 kindnet-hcclj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m30s
	  kube-system                 kube-apiserver-ha-097312-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-controller-manager-ha-097312-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-z6ss5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-ha-097312-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-vip-ha-097312-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m31s)  kubelet          Node ha-097312-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m31s)  kubelet          Node ha-097312-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m31s)  kubelet          Node ha-097312-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  NodeNotReady             104s                   node-controller  Node ha-097312-m02 status is now: NodeNotReady
	
	
	Name:               ha-097312-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_59_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:59:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:03:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    ha-097312-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 21b2a00385684360824371ae7a980598
	  System UUID:                21b2a003-8568-4360-8243-71ae7a980598
	  Boot ID:                    960c8b17-8be2-4e75-85e5-dc8c84a6f034
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tx8b9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-097312-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m11s
	  kube-system                 kindnet-lcrdg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-097312-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-097312-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-vs524                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-097312-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-097312-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-097312-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-097312-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-097312-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	
	
	Name:               ha-097312-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_00_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:00:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:03:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    ha-097312-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 23903b49596849ed8163495c455231a4
	  System UUID:                23903b49-5968-49ed-8163-495c455231a4
	  Boot ID:                    b209787f-e977-446d-9180-ea83c0a28142
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pzs94       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-7hlnw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m3s)  kubelet          Node ha-097312-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m3s)  kubelet          Node ha-097312-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m3s)  kubelet          Node ha-097312-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-097312-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep23 12:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052097] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038111] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.768653] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.021290] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.561361] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.704633] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.056129] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055848] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170191] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.146996] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.300750] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.930853] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +3.791133] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.059635] kauditd_printk_skb: 158 callbacks suppressed
	[Sep23 12:57] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.088641] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.268527] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.165221] kauditd_printk_skb: 38 callbacks suppressed
	[Sep23 12:58] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad] <==
	{"level":"warn","ts":"2024-09-23T13:03:27.386307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.402798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.412759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.419521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.424114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.427753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.434891Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.435784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.444329Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.453336Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.457520Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.461021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.483936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.495388Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.501453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.508081Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.511440Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.512723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.515991Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.519510Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.527146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.534456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:27.548887Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e470b762e3b365ab","rtt":"971.495µs","error":"dial tcp 192.168.39.214:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-23T13:03:27.549004Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e470b762e3b365ab","rtt":"8.776689ms","error":"dial tcp 192.168.39.214:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-23T13:03:27.583450Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:03:27 up 7 min,  0 users,  load average: 0.15, 0.24, 0.13
	Linux ha-097312 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c] <==
	I0923 13:02:49.637912       1 main.go:299] handling current node
	I0923 13:02:59.639086       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:02:59.639197       1 main.go:299] handling current node
	I0923 13:02:59.639228       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:02:59.639248       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:02:59.639404       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:02:59.639427       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:02:59.639504       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:02:59.639523       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:03:09.635495       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:03:09.635583       1 main.go:299] handling current node
	I0923 13:03:09.635613       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:03:09.635671       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:03:09.635944       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:03:09.635992       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:03:09.636057       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:03:09.636075       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:03:19.639090       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:03:19.639126       1 main.go:299] handling current node
	I0923 13:03:19.639140       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:03:19.639145       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:03:19.639271       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:03:19.639276       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:03:19.639330       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:03:19.639334       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333] <==
	I0923 12:57:02.020359       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0923 12:57:02.088327       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 12:57:06.152802       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0923 12:57:06.755775       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0923 12:57:57.925529       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 12:57:57.925590       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.353µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0923 12:57:57.926736       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0923 12:57:57.927891       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 12:57:57.929106       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.691541ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0923 12:59:48.392448       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33954: use of closed network connection
	E0923 12:59:48.613880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33958: use of closed network connection
	E0923 12:59:48.808088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58634: use of closed network connection
	E0923 12:59:49.001780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58648: use of closed network connection
	E0923 12:59:49.197483       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58666: use of closed network connection
	E0923 12:59:49.377774       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58694: use of closed network connection
	E0923 12:59:49.575983       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58712: use of closed network connection
	E0923 12:59:49.768426       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58734: use of closed network connection
	E0923 12:59:49.967451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58756: use of closed network connection
	E0923 12:59:50.265392       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58784: use of closed network connection
	E0923 12:59:50.450981       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58804: use of closed network connection
	E0923 12:59:50.652809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58810: use of closed network connection
	E0923 12:59:50.861752       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58822: use of closed network connection
	E0923 12:59:51.064797       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58830: use of closed network connection
	E0923 12:59:51.264921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58846: use of closed network connection
	W0923 13:01:20.906998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160 192.168.39.174]
	
	
	==> kube-controller-manager [476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492] <==
	I0923 13:00:25.249956       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-097312-m04" podCIDRs=["10.244.3.0/24"]
	I0923 13:00:25.250021       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.250063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.268205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.370449       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.456902       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.813447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.983304       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-097312-m04"
	I0923 13:00:25.983773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:26.090111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:28.408814       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:28.484815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:35.660172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:45.897287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:45.897415       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-097312-m04"
	I0923 13:00:45.912394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:46.005249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:55.964721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:01:43.436073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	I0923 13:01:43.436177       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-097312-m04"
	I0923 13:01:43.460744       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	I0923 13:01:43.587511       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.511152ms"
	I0923 13:01:43.588537       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.099µs"
	I0923 13:01:46.104982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	I0923 13:01:48.741428       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	
	
	==> kube-proxy [37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 12:57:08.497927       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 12:57:08.513689       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.160"]
	E0923 12:57:08.513839       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 12:57:08.553172       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 12:57:08.553258       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 12:57:08.553295       1 server_linux.go:169] "Using iptables Proxier"
	I0923 12:57:08.556859       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 12:57:08.557876       1 server.go:483] "Version info" version="v1.31.1"
	I0923 12:57:08.557939       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:57:08.564961       1 config.go:199] "Starting service config controller"
	I0923 12:57:08.565367       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 12:57:08.565715       1 config.go:328] "Starting node config controller"
	I0923 12:57:08.570600       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 12:57:08.566364       1 config.go:105] "Starting endpoint slice config controller"
	I0923 12:57:08.570712       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 12:57:08.570719       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 12:57:08.666413       1 shared_informer.go:320] Caches are synced for service config
	I0923 12:57:08.670755       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431] <==
	W0923 12:57:00.057793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 12:57:00.058398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.080608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.080826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.112818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.112990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.129261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 12:57:00.129830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.181934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:57:00.182022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.183285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.183358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.190093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 12:57:00.190177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.223708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:57:00.223794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.255027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.255136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.582968       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:57:00.583073       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 12:57:02.534371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 12:59:14.854178       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vs524\": pod kube-proxy-vs524 is already assigned to node \"ha-097312-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vs524" node="ha-097312-m03"
	E0923 12:59:14.854357       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 92738649-c52b-44d5-866b-8cda751a538c(kube-system/kube-proxy-vs524) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vs524"
	E0923 12:59:14.854394       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vs524\": pod kube-proxy-vs524 is already assigned to node \"ha-097312-m03\"" pod="kube-system/kube-proxy-vs524"
	I0923 12:59:14.854436       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vs524" node="ha-097312-m03"
	
	
	==> kubelet <==
	Sep 23 13:02:02 ha-097312 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:02:02 ha-097312 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:02:02 ha-097312 kubelet[1304]: E0923 13:02:02.214007    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096522213607138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:02 ha-097312 kubelet[1304]: E0923 13:02:02.214059    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096522213607138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:12 ha-097312 kubelet[1304]: E0923 13:02:12.219070    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096532215431820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:12 ha-097312 kubelet[1304]: E0923 13:02:12.219206    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096532215431820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:22 ha-097312 kubelet[1304]: E0923 13:02:22.225821    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096542223481825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:22 ha-097312 kubelet[1304]: E0923 13:02:22.230227    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096542223481825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:32 ha-097312 kubelet[1304]: E0923 13:02:32.232689    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096552232228787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:32 ha-097312 kubelet[1304]: E0923 13:02:32.233031    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096552232228787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:42 ha-097312 kubelet[1304]: E0923 13:02:42.235021    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096562234565302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:42 ha-097312 kubelet[1304]: E0923 13:02:42.235083    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096562234565302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:52 ha-097312 kubelet[1304]: E0923 13:02:52.237647    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096572237152536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:52 ha-097312 kubelet[1304]: E0923 13:02:52.237938    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096572237152536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:02 ha-097312 kubelet[1304]: E0923 13:03:02.165544    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:03:02 ha-097312 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:03:02 ha-097312 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:03:02 ha-097312 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:03:02 ha-097312 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:03:02 ha-097312 kubelet[1304]: E0923 13:03:02.240514    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096582240150204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:02 ha-097312 kubelet[1304]: E0923 13:03:02.240606    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096582240150204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:12 ha-097312 kubelet[1304]: E0923 13:03:12.243234    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096592242789885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:12 ha-097312 kubelet[1304]: E0923 13:03:12.243281    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096592242789885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:22 ha-097312 kubelet[1304]: E0923 13:03:22.245580    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096602245012698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:22 ha-097312 kubelet[1304]: E0923 13:03:22.246002    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096602245012698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-097312 -n ha-097312
helpers_test.go:261: (dbg) Run:  kubectl --context ha-097312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.55s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.425810452s)
ha_test.go:413: expected profile "ha-097312" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-097312\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-097312\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-097312\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.160\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.214\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.174\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.20\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\
"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\
":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-097312 -n ha-097312
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-097312 logs -n 25: (1.403864298s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3809348295/001/cp-test_ha-097312-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312:/home/docker/cp-test_ha-097312-m03_ha-097312.txt                       |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312 sudo cat                                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312.txt                                 |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m02:/home/docker/cp-test_ha-097312-m03_ha-097312-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m02 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04:/home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m04 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp testdata/cp-test.txt                                                | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3809348295/001/cp-test_ha-097312-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312:/home/docker/cp-test_ha-097312-m04_ha-097312.txt                       |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312 sudo cat                                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312.txt                                 |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m02:/home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m02 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03:/home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m03 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-097312 node stop m02 -v=7                                                     | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:56:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:56:21.828511  682373 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:56:21.828805  682373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:56:21.828814  682373 out.go:358] Setting ErrFile to fd 2...
	I0923 12:56:21.828819  682373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:56:21.829029  682373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 12:56:21.829675  682373 out.go:352] Setting JSON to false
	I0923 12:56:21.830688  682373 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9525,"bootTime":1727086657,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:56:21.830795  682373 start.go:139] virtualization: kvm guest
	I0923 12:56:21.833290  682373 out.go:177] * [ha-097312] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:56:21.834872  682373 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:56:21.834925  682373 notify.go:220] Checking for updates...
	I0923 12:56:21.837758  682373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:56:21.839025  682373 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:56:21.840177  682373 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:56:21.841224  682373 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:56:21.842534  682373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:56:21.843976  682373 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:56:21.880376  682373 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 12:56:21.881602  682373 start.go:297] selected driver: kvm2
	I0923 12:56:21.881616  682373 start.go:901] validating driver "kvm2" against <nil>
	I0923 12:56:21.881629  682373 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:56:21.882531  682373 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:56:21.882644  682373 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 12:56:21.899127  682373 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 12:56:21.899181  682373 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:56:21.899449  682373 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:56:21.899480  682373 cni.go:84] Creating CNI manager for ""
	I0923 12:56:21.899527  682373 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 12:56:21.899535  682373 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 12:56:21.899626  682373 start.go:340] cluster config:
	{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0923 12:56:21.899742  682373 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:56:21.901896  682373 out.go:177] * Starting "ha-097312" primary control-plane node in "ha-097312" cluster
	I0923 12:56:21.903202  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:56:21.903247  682373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 12:56:21.903256  682373 cache.go:56] Caching tarball of preloaded images
	I0923 12:56:21.903357  682373 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:56:21.903371  682373 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:56:21.903879  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:56:21.903923  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json: {Name:mkf732f530eb47d72142f084d9eb3cd0edcde9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:21.904117  682373 start.go:360] acquireMachinesLock for ha-097312: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:56:21.904165  682373 start.go:364] duration metric: took 29.656µs to acquireMachinesLock for "ha-097312"
	I0923 12:56:21.904184  682373 start.go:93] Provisioning new machine with config: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:56:21.904282  682373 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 12:56:21.905963  682373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:56:21.906128  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:56:21.906175  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:56:21.921537  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41699
	I0923 12:56:21.922061  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:56:21.922650  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:56:21.922667  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:56:21.923007  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:56:21.923179  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:21.923321  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:21.923466  682373 start.go:159] libmachine.API.Create for "ha-097312" (driver="kvm2")
	I0923 12:56:21.923507  682373 client.go:168] LocalClient.Create starting
	I0923 12:56:21.923545  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:56:21.923585  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:56:21.923623  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:56:21.923700  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:56:21.923738  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:56:21.923763  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:56:21.923785  682373 main.go:141] libmachine: Running pre-create checks...
	I0923 12:56:21.923796  682373 main.go:141] libmachine: (ha-097312) Calling .PreCreateCheck
	I0923 12:56:21.924185  682373 main.go:141] libmachine: (ha-097312) Calling .GetConfigRaw
	I0923 12:56:21.924615  682373 main.go:141] libmachine: Creating machine...
	I0923 12:56:21.924630  682373 main.go:141] libmachine: (ha-097312) Calling .Create
	I0923 12:56:21.924800  682373 main.go:141] libmachine: (ha-097312) Creating KVM machine...
	I0923 12:56:21.926163  682373 main.go:141] libmachine: (ha-097312) DBG | found existing default KVM network
	I0923 12:56:21.926884  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:21.926751  682396 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0923 12:56:21.926933  682373 main.go:141] libmachine: (ha-097312) DBG | created network xml: 
	I0923 12:56:21.926948  682373 main.go:141] libmachine: (ha-097312) DBG | <network>
	I0923 12:56:21.926958  682373 main.go:141] libmachine: (ha-097312) DBG |   <name>mk-ha-097312</name>
	I0923 12:56:21.926973  682373 main.go:141] libmachine: (ha-097312) DBG |   <dns enable='no'/>
	I0923 12:56:21.926984  682373 main.go:141] libmachine: (ha-097312) DBG |   
	I0923 12:56:21.926995  682373 main.go:141] libmachine: (ha-097312) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 12:56:21.927005  682373 main.go:141] libmachine: (ha-097312) DBG |     <dhcp>
	I0923 12:56:21.927010  682373 main.go:141] libmachine: (ha-097312) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 12:56:21.927018  682373 main.go:141] libmachine: (ha-097312) DBG |     </dhcp>
	I0923 12:56:21.927023  682373 main.go:141] libmachine: (ha-097312) DBG |   </ip>
	I0923 12:56:21.927028  682373 main.go:141] libmachine: (ha-097312) DBG |   
	I0923 12:56:21.927037  682373 main.go:141] libmachine: (ha-097312) DBG | </network>
	I0923 12:56:21.927049  682373 main.go:141] libmachine: (ha-097312) DBG | 
	I0923 12:56:21.932476  682373 main.go:141] libmachine: (ha-097312) DBG | trying to create private KVM network mk-ha-097312 192.168.39.0/24...
	I0923 12:56:22.007044  682373 main.go:141] libmachine: (ha-097312) DBG | private KVM network mk-ha-097312 192.168.39.0/24 created
	I0923 12:56:22.007081  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.007015  682396 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:56:22.007094  682373 main.go:141] libmachine: (ha-097312) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312 ...
	I0923 12:56:22.007109  682373 main.go:141] libmachine: (ha-097312) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:56:22.007154  682373 main.go:141] libmachine: (ha-097312) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:56:22.288956  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.288821  682396 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa...
	I0923 12:56:22.447093  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.446935  682396 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/ha-097312.rawdisk...
	I0923 12:56:22.447150  682373 main.go:141] libmachine: (ha-097312) DBG | Writing magic tar header
	I0923 12:56:22.447245  682373 main.go:141] libmachine: (ha-097312) DBG | Writing SSH key tar header
	I0923 12:56:22.447298  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.447079  682396 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312 ...
	I0923 12:56:22.447319  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312 (perms=drwx------)
	I0923 12:56:22.447334  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:56:22.447344  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:56:22.447360  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:56:22.447372  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:56:22.447381  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312
	I0923 12:56:22.447394  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:56:22.447407  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:56:22.447421  682373 main.go:141] libmachine: (ha-097312) Creating domain...
	I0923 12:56:22.447439  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:56:22.447455  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:56:22.447468  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:56:22.447479  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:56:22.447492  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home
	I0923 12:56:22.447500  682373 main.go:141] libmachine: (ha-097312) DBG | Skipping /home - not owner
	I0923 12:56:22.448456  682373 main.go:141] libmachine: (ha-097312) define libvirt domain using xml: 
	I0923 12:56:22.448482  682373 main.go:141] libmachine: (ha-097312) <domain type='kvm'>
	I0923 12:56:22.448488  682373 main.go:141] libmachine: (ha-097312)   <name>ha-097312</name>
	I0923 12:56:22.448493  682373 main.go:141] libmachine: (ha-097312)   <memory unit='MiB'>2200</memory>
	I0923 12:56:22.448498  682373 main.go:141] libmachine: (ha-097312)   <vcpu>2</vcpu>
	I0923 12:56:22.448502  682373 main.go:141] libmachine: (ha-097312)   <features>
	I0923 12:56:22.448506  682373 main.go:141] libmachine: (ha-097312)     <acpi/>
	I0923 12:56:22.448510  682373 main.go:141] libmachine: (ha-097312)     <apic/>
	I0923 12:56:22.448514  682373 main.go:141] libmachine: (ha-097312)     <pae/>
	I0923 12:56:22.448526  682373 main.go:141] libmachine: (ha-097312)     
	I0923 12:56:22.448561  682373 main.go:141] libmachine: (ha-097312)   </features>
	I0923 12:56:22.448583  682373 main.go:141] libmachine: (ha-097312)   <cpu mode='host-passthrough'>
	I0923 12:56:22.448588  682373 main.go:141] libmachine: (ha-097312)   
	I0923 12:56:22.448594  682373 main.go:141] libmachine: (ha-097312)   </cpu>
	I0923 12:56:22.448600  682373 main.go:141] libmachine: (ha-097312)   <os>
	I0923 12:56:22.448607  682373 main.go:141] libmachine: (ha-097312)     <type>hvm</type>
	I0923 12:56:22.448634  682373 main.go:141] libmachine: (ha-097312)     <boot dev='cdrom'/>
	I0923 12:56:22.448653  682373 main.go:141] libmachine: (ha-097312)     <boot dev='hd'/>
	I0923 12:56:22.448665  682373 main.go:141] libmachine: (ha-097312)     <bootmenu enable='no'/>
	I0923 12:56:22.448674  682373 main.go:141] libmachine: (ha-097312)   </os>
	I0923 12:56:22.448693  682373 main.go:141] libmachine: (ha-097312)   <devices>
	I0923 12:56:22.448701  682373 main.go:141] libmachine: (ha-097312)     <disk type='file' device='cdrom'>
	I0923 12:56:22.448711  682373 main.go:141] libmachine: (ha-097312)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/boot2docker.iso'/>
	I0923 12:56:22.448722  682373 main.go:141] libmachine: (ha-097312)       <target dev='hdc' bus='scsi'/>
	I0923 12:56:22.448735  682373 main.go:141] libmachine: (ha-097312)       <readonly/>
	I0923 12:56:22.448746  682373 main.go:141] libmachine: (ha-097312)     </disk>
	I0923 12:56:22.448754  682373 main.go:141] libmachine: (ha-097312)     <disk type='file' device='disk'>
	I0923 12:56:22.448761  682373 main.go:141] libmachine: (ha-097312)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:56:22.448771  682373 main.go:141] libmachine: (ha-097312)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/ha-097312.rawdisk'/>
	I0923 12:56:22.448779  682373 main.go:141] libmachine: (ha-097312)       <target dev='hda' bus='virtio'/>
	I0923 12:56:22.448783  682373 main.go:141] libmachine: (ha-097312)     </disk>
	I0923 12:56:22.448790  682373 main.go:141] libmachine: (ha-097312)     <interface type='network'>
	I0923 12:56:22.448799  682373 main.go:141] libmachine: (ha-097312)       <source network='mk-ha-097312'/>
	I0923 12:56:22.448805  682373 main.go:141] libmachine: (ha-097312)       <model type='virtio'/>
	I0923 12:56:22.448810  682373 main.go:141] libmachine: (ha-097312)     </interface>
	I0923 12:56:22.448820  682373 main.go:141] libmachine: (ha-097312)     <interface type='network'>
	I0923 12:56:22.448833  682373 main.go:141] libmachine: (ha-097312)       <source network='default'/>
	I0923 12:56:22.448840  682373 main.go:141] libmachine: (ha-097312)       <model type='virtio'/>
	I0923 12:56:22.448845  682373 main.go:141] libmachine: (ha-097312)     </interface>
	I0923 12:56:22.448855  682373 main.go:141] libmachine: (ha-097312)     <serial type='pty'>
	I0923 12:56:22.448860  682373 main.go:141] libmachine: (ha-097312)       <target port='0'/>
	I0923 12:56:22.448869  682373 main.go:141] libmachine: (ha-097312)     </serial>
	I0923 12:56:22.448875  682373 main.go:141] libmachine: (ha-097312)     <console type='pty'>
	I0923 12:56:22.448885  682373 main.go:141] libmachine: (ha-097312)       <target type='serial' port='0'/>
	I0923 12:56:22.448897  682373 main.go:141] libmachine: (ha-097312)     </console>
	I0923 12:56:22.448912  682373 main.go:141] libmachine: (ha-097312)     <rng model='virtio'>
	I0923 12:56:22.448925  682373 main.go:141] libmachine: (ha-097312)       <backend model='random'>/dev/random</backend>
	I0923 12:56:22.448933  682373 main.go:141] libmachine: (ha-097312)     </rng>
	I0923 12:56:22.448940  682373 main.go:141] libmachine: (ha-097312)     
	I0923 12:56:22.448949  682373 main.go:141] libmachine: (ha-097312)     
	I0923 12:56:22.448957  682373 main.go:141] libmachine: (ha-097312)   </devices>
	I0923 12:56:22.448965  682373 main.go:141] libmachine: (ha-097312) </domain>
	I0923 12:56:22.448975  682373 main.go:141] libmachine: (ha-097312) 
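The XML block above is the complete libvirt domain definition the kvm2 driver hands to libvirtd before booting the VM. As a rough sketch of that define-and-start step (not the driver's actual code; the qemu:///system URI is taken from the KVMQemuURI field logged further below), the libvirt.org/go/libvirt bindings could be used like this:

package main

import (
	"fmt"
	"log"

	"libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	// Connect to the system libvirt daemon (qemu:///system is assumed here).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return fmt.Errorf("connect: %w", err)
	}
	defer conn.Close()

	// Define the persistent domain from the XML, then boot it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}

func main() {
	// domainXML would hold the full <domain type='kvm'>...</domain> document above.
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}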
	I0923 12:56:22.453510  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:86:5c:23 in network default
	I0923 12:56:22.454136  682373 main.go:141] libmachine: (ha-097312) Ensuring networks are active...
	I0923 12:56:22.454160  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:22.455025  682373 main.go:141] libmachine: (ha-097312) Ensuring network default is active
	I0923 12:56:22.455403  682373 main.go:141] libmachine: (ha-097312) Ensuring network mk-ha-097312 is active
	I0923 12:56:22.455910  682373 main.go:141] libmachine: (ha-097312) Getting domain xml...
	I0923 12:56:22.456804  682373 main.go:141] libmachine: (ha-097312) Creating domain...
	I0923 12:56:23.684285  682373 main.go:141] libmachine: (ha-097312) Waiting to get IP...
	I0923 12:56:23.685050  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:23.685483  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:23.685549  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:23.685457  682396 retry.go:31] will retry after 284.819092ms: waiting for machine to come up
	I0923 12:56:23.972224  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:23.972712  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:23.972742  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:23.972658  682396 retry.go:31] will retry after 296.568661ms: waiting for machine to come up
	I0923 12:56:24.271431  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:24.271859  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:24.271878  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:24.271837  682396 retry.go:31] will retry after 305.883088ms: waiting for machine to come up
	I0923 12:56:24.579449  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:24.579888  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:24.579915  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:24.579844  682396 retry.go:31] will retry after 417.526062ms: waiting for machine to come up
	I0923 12:56:24.999494  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:24.999869  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:24.999897  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:24.999819  682396 retry.go:31] will retry after 647.110055ms: waiting for machine to come up
	I0923 12:56:25.648547  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:25.649112  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:25.649144  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:25.649045  682396 retry.go:31] will retry after 699.974926ms: waiting for machine to come up
	I0923 12:56:26.350970  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:26.351427  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:26.351457  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:26.351401  682396 retry.go:31] will retry after 822.151225ms: waiting for machine to come up
	I0923 12:56:27.175278  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:27.175659  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:27.175688  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:27.175617  682396 retry.go:31] will retry after 1.471324905s: waiting for machine to come up
	I0923 12:56:28.649431  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:28.649912  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:28.649939  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:28.649865  682396 retry.go:31] will retry after 1.835415418s: waiting for machine to come up
	I0923 12:56:30.487327  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:30.487788  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:30.487842  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:30.487762  682396 retry.go:31] will retry after 1.452554512s: waiting for machine to come up
	I0923 12:56:31.941929  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:31.942466  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:31.942496  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:31.942406  682396 retry.go:31] will retry after 2.833337463s: waiting for machine to come up
	I0923 12:56:34.777034  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:34.777417  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:34.777435  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:34.777385  682396 retry.go:31] will retry after 2.506824406s: waiting for machine to come up
	I0923 12:56:37.285508  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:37.285975  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:37.286004  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:37.285923  682396 retry.go:31] will retry after 2.872661862s: waiting for machine to come up
	I0923 12:56:40.162076  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:40.162525  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:40.162542  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:40.162478  682396 retry.go:31] will retry after 3.815832653s: waiting for machine to come up
	I0923 12:56:43.980644  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:43.981295  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has current primary IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:43.981341  682373 main.go:141] libmachine: (ha-097312) Found IP for machine: 192.168.39.160
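The retry.go lines above poll the DHCP leases with a growing delay until the new domain's MAC shows up with an address. A minimal, generic version of that wait loop (a hypothetical helper, not the retry.go implementation) might look like:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// growing the delay between attempts much like the log above (~300ms up to a
// few seconds).
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond

	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("attempt %d: no IP yet, retrying in %v\n", attempt, delay)
		time.Sleep(delay)

		// Back off, but cap the delay so we keep probing reasonably often.
		if delay < 4*time.Second {
			delay += delay / 2
		}
	}
	return "", errors.New("timed out waiting for machine to get an IP")
}

func main() {
	// A stand-in lookup that "finds" the address after a few tries.
	tries := 0
	ip, err := waitForIP(func() (string, error) {
		tries++
		if tries < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.160", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}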
	I0923 12:56:43.981355  682373 main.go:141] libmachine: (ha-097312) Reserving static IP address...
	I0923 12:56:43.981713  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find host DHCP lease matching {name: "ha-097312", mac: "52:54:00:06:7f:c5", ip: "192.168.39.160"} in network mk-ha-097312
	I0923 12:56:44.063688  682373 main.go:141] libmachine: (ha-097312) DBG | Getting to WaitForSSH function...
	I0923 12:56:44.063720  682373 main.go:141] libmachine: (ha-097312) Reserved static IP address: 192.168.39.160
	I0923 12:56:44.063760  682373 main.go:141] libmachine: (ha-097312) Waiting for SSH to be available...
	I0923 12:56:44.066589  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.067094  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.067121  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.067273  682373 main.go:141] libmachine: (ha-097312) DBG | Using SSH client type: external
	I0923 12:56:44.067298  682373 main.go:141] libmachine: (ha-097312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa (-rw-------)
	I0923 12:56:44.067335  682373 main.go:141] libmachine: (ha-097312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:56:44.067346  682373 main.go:141] libmachine: (ha-097312) DBG | About to run SSH command:
	I0923 12:56:44.067388  682373 main.go:141] libmachine: (ha-097312) DBG | exit 0
	I0923 12:56:44.194221  682373 main.go:141] libmachine: (ha-097312) DBG | SSH cmd err, output: <nil>: 
	I0923 12:56:44.194546  682373 main.go:141] libmachine: (ha-097312) KVM machine creation complete!
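WaitForSSH shells out to /usr/bin/ssh with the options shown above and simply runs `exit 0` until the command succeeds. A self-contained sketch of that probe, reusing the same flags (the helper name and retry interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running `ssh ... exit 0` until the command succeeds,
// mirroring the external-client probe in the log (option list abbreviated).
func waitForSSH(user, addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, addr),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH answered; the machine is reachable.
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s@%s never became available", user, addr)
}

func main() {
	err := waitForSSH("docker", "192.168.39.160",
		"/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa",
		90*time.Second)
	fmt.Println("ssh ready:", err == nil)
}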
	I0923 12:56:44.194794  682373 main.go:141] libmachine: (ha-097312) Calling .GetConfigRaw
	I0923 12:56:44.195383  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:44.195600  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:44.195740  682373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:56:44.195754  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:56:44.197002  682373 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:56:44.197015  682373 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:56:44.197021  682373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:56:44.197025  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.200085  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.200458  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.200480  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.200781  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.201011  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.201209  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.201346  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.201528  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.201732  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.201744  682373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:56:44.309556  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:56:44.309581  682373 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:56:44.309589  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.312757  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.313154  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.313202  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.313393  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.313633  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.313899  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.314086  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.314302  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.314501  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.314513  682373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:56:44.422704  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:56:44.422779  682373 main.go:141] libmachine: found compatible host: buildroot
	I0923 12:56:44.422786  682373 main.go:141] libmachine: Provisioning with buildroot...
	I0923 12:56:44.422796  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:44.423069  682373 buildroot.go:166] provisioning hostname "ha-097312"
	I0923 12:56:44.423101  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:44.423298  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.426419  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.426747  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.426769  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.426988  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.427186  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.427341  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.427471  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.427647  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.427840  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.427852  682373 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-097312 && echo "ha-097312" | sudo tee /etc/hostname
	I0923 12:56:44.548083  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312
	
	I0923 12:56:44.548119  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.550930  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.551237  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.551281  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.551446  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.551667  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.551843  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.551987  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.552153  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.552393  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.552421  682373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-097312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-097312/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-097312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:56:44.667004  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:56:44.667043  682373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:56:44.667068  682373 buildroot.go:174] setting up certificates
	I0923 12:56:44.667085  682373 provision.go:84] configureAuth start
	I0923 12:56:44.667098  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:44.667438  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:44.670311  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.670792  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.670845  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.670910  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.673549  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.673871  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.673897  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.674038  682373 provision.go:143] copyHostCerts
	I0923 12:56:44.674077  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:56:44.674137  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 12:56:44.674159  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:56:44.674245  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:56:44.674380  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:56:44.674409  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 12:56:44.674417  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:56:44.674460  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:56:44.674580  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:56:44.674634  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 12:56:44.674642  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:56:44.674698  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 12:56:44.674832  682373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.ha-097312 san=[127.0.0.1 192.168.39.160 ha-097312 localhost minikube]
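The provisioning step generates a server certificate signed by the minikube CA with exactly the SANs listed in the log line above. A sketch of producing such a certificate with Go's crypto/x509 (here a throwaway CA is created in-process; minikube would instead load ca.pem and ca-key.pem from the store path):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA for the sketch only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server cert with the same kind of SANs as in the log:
	// [127.0.0.1 192.168.39.160 ha-097312 localhost minikube].
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-097312"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-097312", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.160")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}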
	I0923 12:56:44.904863  682373 provision.go:177] copyRemoteCerts
	I0923 12:56:44.904957  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:56:44.904984  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.908150  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.908582  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.908619  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.908884  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.909135  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.909342  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.909527  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:44.992087  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:56:44.992199  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 12:56:45.016139  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:56:45.016229  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:56:45.039856  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:56:45.040045  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:56:45.063092  682373 provision.go:87] duration metric: took 395.980147ms to configureAuth
	I0923 12:56:45.063127  682373 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:56:45.063302  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:56:45.063398  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.066695  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.067038  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.067071  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.067240  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.067488  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.067676  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.067817  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.068046  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:45.068308  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:45.068326  682373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:56:45.283348  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 12:56:45.283372  682373 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:56:45.283380  682373 main.go:141] libmachine: (ha-097312) Calling .GetURL
	I0923 12:56:45.284754  682373 main.go:141] libmachine: (ha-097312) DBG | Using libvirt version 6000000
	I0923 12:56:45.287147  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.287577  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.287606  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.287745  682373 main.go:141] libmachine: Docker is up and running!
	I0923 12:56:45.287766  682373 main.go:141] libmachine: Reticulating splines...
	I0923 12:56:45.287773  682373 client.go:171] duration metric: took 23.364255409s to LocalClient.Create
	I0923 12:56:45.287797  682373 start.go:167] duration metric: took 23.364332593s to libmachine.API.Create "ha-097312"
	I0923 12:56:45.287811  682373 start.go:293] postStartSetup for "ha-097312" (driver="kvm2")
	I0923 12:56:45.287824  682373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:56:45.287841  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.288125  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:56:45.288161  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.290362  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.290827  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.290857  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.291024  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.291233  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.291406  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.291630  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:45.376057  682373 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:56:45.380314  682373 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:56:45.380346  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:56:45.380412  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:56:45.380483  682373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 12:56:45.380492  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 12:56:45.380593  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:56:45.390109  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:56:45.414414  682373 start.go:296] duration metric: took 126.585208ms for postStartSetup
	I0923 12:56:45.414519  682373 main.go:141] libmachine: (ha-097312) Calling .GetConfigRaw
	I0923 12:56:45.415223  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:45.418035  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.418499  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.418535  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.418757  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:56:45.418971  682373 start.go:128] duration metric: took 23.514676713s to createHost
	I0923 12:56:45.419008  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.421290  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.421582  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.421607  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.421739  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.421993  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.422231  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.422397  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.422624  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:45.422888  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:45.422913  682373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:56:45.530668  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727096205.504964904
	
	I0923 12:56:45.530696  682373 fix.go:216] guest clock: 1727096205.504964904
	I0923 12:56:45.530705  682373 fix.go:229] Guest: 2024-09-23 12:56:45.504964904 +0000 UTC Remote: 2024-09-23 12:56:45.41898604 +0000 UTC m=+23.627481107 (delta=85.978864ms)
	I0923 12:56:45.530768  682373 fix.go:200] guest clock delta is within tolerance: 85.978864ms
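The guest-clock check above runs `date +%s.%N` over SSH and compares the result with the host clock, correcting the guest only when the delta is too large. A small sketch of that comparison (the 2-second tolerance here is an assumption):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output such as "1727096205.504964904"
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1727096205.504964904")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// 2s is an assumed tolerance for the sketch; within it, no clock fix is needed.
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < 2*time.Second)
}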
	I0923 12:56:45.530777  682373 start.go:83] releasing machines lock for "ha-097312", held for 23.626602839s
	I0923 12:56:45.530803  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.531129  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:45.533942  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.534282  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.534313  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.534510  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.535018  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.535175  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.535268  682373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:56:45.535329  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.535407  682373 ssh_runner.go:195] Run: cat /version.json
	I0923 12:56:45.535432  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.538344  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.538693  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.538718  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.538736  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.538916  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.539107  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.539142  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.539168  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.539301  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.539401  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.539491  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:45.539522  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.539669  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.539871  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:45.615078  682373 ssh_runner.go:195] Run: systemctl --version
	I0923 12:56:45.652339  682373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:56:45.814596  682373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:56:45.820480  682373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:56:45.820567  682373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:56:45.837076  682373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:56:45.837109  682373 start.go:495] detecting cgroup driver to use...
	I0923 12:56:45.837204  682373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:56:45.852886  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:56:45.867319  682373 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:56:45.867387  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:56:45.881106  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:56:45.895047  682373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:56:46.010122  682373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:56:46.160036  682373 docker.go:233] disabling docker service ...
	I0923 12:56:46.160166  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:56:46.174281  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:56:46.187289  682373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:56:46.315823  682373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:56:46.451742  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:56:46.465159  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:56:46.485490  682373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:56:46.485567  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.496172  682373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:56:46.496276  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.506865  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.517182  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.527559  682373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:56:46.538362  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.548742  682373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.565850  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.576416  682373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:56:46.586314  682373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:56:46.586391  682373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:56:46.600960  682373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:56:46.613686  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:56:46.747213  682373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 12:56:46.833362  682373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:56:46.833455  682373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
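The 60-second wait above is essentially a poll for the CRI-O socket path to appear after the service restart. A minimal version of that wait, assuming a plain stat loop:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}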
	I0923 12:56:46.838407  682373 start.go:563] Will wait 60s for crictl version
	I0923 12:56:46.838481  682373 ssh_runner.go:195] Run: which crictl
	I0923 12:56:46.842254  682373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:56:46.881238  682373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 12:56:46.881313  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:56:46.910755  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:56:46.941180  682373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:56:46.942573  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:46.945291  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:46.945654  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:46.945683  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:46.945901  682373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:56:46.950351  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:56:46.963572  682373 kubeadm.go:883] updating cluster {Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:56:46.963689  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:56:46.963752  682373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:56:46.995863  682373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 12:56:46.995949  682373 ssh_runner.go:195] Run: which lz4
	I0923 12:56:47.000077  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 12:56:47.000199  682373 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 12:56:47.004245  682373 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 12:56:47.004290  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 12:56:48.233778  682373 crio.go:462] duration metric: took 1.233615545s to copy over tarball
	I0923 12:56:48.233872  682373 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 12:56:50.293806  682373 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059892855s)
	I0923 12:56:50.293864  682373 crio.go:469] duration metric: took 2.060053222s to extract the tarball
	I0923 12:56:50.293875  682373 ssh_runner.go:146] rm: /preloaded.tar.lz4
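Several of the "duration metric: took ..." lines above come from timing a step (here the tarball copy and extraction) and logging the elapsed time. A tiny helper in that spirit (hypothetical, not minikube's own):

package main

import (
	"fmt"
	"time"
)

// timed runs fn and reports how long it took, in the spirit of the
// "duration metric: took ..." lines in the log.
func timed(label string, fn func() error) error {
	start := time.Now()
	err := fn()
	fmt.Printf("duration metric: took %s to %s\n", time.Since(start), label)
	return err
}

func main() {
	_ = timed("copy over tarball", func() error {
		time.Sleep(120 * time.Millisecond) // stand-in for the scp + tar work
		return nil
	})
}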
	I0923 12:56:50.330288  682373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:56:50.382422  682373 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 12:56:50.382453  682373 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:56:50.382463  682373 kubeadm.go:934] updating node { 192.168.39.160 8443 v1.31.1 crio true true} ...
	I0923 12:56:50.382618  682373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-097312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
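The kubelet unit text above is rendered from the node's settings (kubelet path, hostname override, node IP) before being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A simplified rendering of a similar drop-in with text/template (the template and field names are assumptions, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is a simplified stand-in for the 10-kubeadm.conf drop-in
// written to /etc/systemd/system/kubelet.service.d/ in the log.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = tmpl.Execute(os.Stdout, struct {
		KubeletPath, NodeName, NodeIP string
	}{
		KubeletPath: "/var/lib/minikube/binaries/v1.31.1/kubelet",
		NodeName:    "ha-097312",
		NodeIP:      "192.168.39.160",
	})
}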
	I0923 12:56:50.382706  682373 ssh_runner.go:195] Run: crio config
	I0923 12:56:50.429046  682373 cni.go:84] Creating CNI manager for ""
	I0923 12:56:50.429071  682373 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 12:56:50.429081  682373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:56:50.429114  682373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-097312 NodeName:ha-097312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:56:50.429251  682373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-097312"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
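Note: the kubeadm config above is rendered by minikube from the options struct logged at kubeadm.go:181. As a rough, hypothetical illustration only (not minikube's actual template or types), a fragment like it can be produced with Go's text/template:

	// Hypothetical sketch: rendering a kubeadm ClusterConfiguration fragment
	// with text/template. Struct and field names here are illustrative.
	package main

	import (
		"os"
		"text/template"
	)

	type clusterCfg struct {
		ClusterName   string
		ControlPlane  string
		PodSubnet     string
		ServiceSubnet string
		K8sVersion    string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	controlPlaneEndpoint: {{.ControlPlane}}
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		// Values taken from the config dump above.
		_ = t.Execute(os.Stdout, clusterCfg{
			ClusterName:   "mk",
			ControlPlane:  "control-plane.minikube.internal:8443",
			PodSubnet:     "10.244.0.0/16",
			ServiceSubnet: "10.96.0.0/12",
			K8sVersion:    "v1.31.1",
		})
	}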
	I0923 12:56:50.429291  682373 kube-vip.go:115] generating kube-vip config ...
	I0923 12:56:50.429336  682373 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:56:50.447284  682373 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:56:50.447397  682373 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
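The manifest above is the static Pod minikube writes so kube-vip advertises the HA VIP 192.168.39.254 on the control plane. A minimal sketch of reading that advertised address back out of such a manifest, assuming gopkg.in/yaml.v3 as the YAML library (an arbitrary choice for the example, not what minikube uses):

	// Sketch: extract the "address" env var from a kube-vip static Pod manifest.
	// Only the fields needed here are modelled; path is the one from the log.
	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	type podManifest struct {
		Spec struct {
			Containers []struct {
				Env []struct {
					Name  string `yaml:"name"`
					Value string `yaml:"value"`
				} `yaml:"env"`
			} `yaml:"containers"`
		} `yaml:"spec"`
	}

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
		if err != nil {
			panic(err)
		}
		var pod podManifest
		if err := yaml.Unmarshal(data, &pod); err != nil {
			panic(err)
		}
		for _, c := range pod.Spec.Containers {
			for _, e := range c.Env {
				if e.Name == "address" {
					fmt.Println("kube-vip advertises", e.Value) // expect 192.168.39.254
				}
			}
		}
	}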
	I0923 12:56:50.447453  682373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:56:50.457555  682373 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:56:50.457631  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 12:56:50.467361  682373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 12:56:50.484221  682373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:56:50.501136  682373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 12:56:50.517771  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0923 12:56:50.535030  682373 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:56:50.538926  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:56:50.550841  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:56:50.685055  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
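The /etc/hosts rewrite above drops any stale control-plane.minikube.internal line and appends the VIP mapping before kubelet is restarted. A rough, local-only Go sketch of the same idempotent update (the real run performs it over SSH as root via the bash one-liner shown):

	// Sketch: keep exactly one "IP<TAB>hostname" line for the control-plane
	// alias in an /etc/hosts-style file. Error handling trimmed for brevity.
	package main

	import (
		"os"
		"strings"
	)

	func pinHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop any stale mapping for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		_ = pinHost("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal")
	}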
	I0923 12:56:50.702466  682373 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312 for IP: 192.168.39.160
	I0923 12:56:50.702500  682373 certs.go:194] generating shared ca certs ...
	I0923 12:56:50.702525  682373 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.702732  682373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:56:50.702796  682373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:56:50.702811  682373 certs.go:256] generating profile certs ...
	I0923 12:56:50.702903  682373 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key
	I0923 12:56:50.702928  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt with IP's: []
	I0923 12:56:50.839973  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt ...
	I0923 12:56:50.840005  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt: {Name:mk3ec295cf75d5f37a812267f291d008d2d41849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.840201  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key ...
	I0923 12:56:50.840215  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key: {Name:mk2a9a6301a953bccf7179cf3fcd9c6c49523a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.840321  682373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9
	I0923 12:56:50.840339  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160 192.168.39.254]
	I0923 12:56:50.957561  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9 ...
	I0923 12:56:50.957598  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9: {Name:mke07e7dcb821169b2edcdcfe37c1283edab6d93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.957795  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9 ...
	I0923 12:56:50.957814  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9: {Name:mk473437de8fd0279ccc88430a74364f16849fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.957935  682373 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9 -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	I0923 12:56:50.958016  682373 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9 -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key
	I0923 12:56:50.958070  682373 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key
	I0923 12:56:50.958086  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt with IP's: []
	I0923 12:56:51.039985  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt ...
	I0923 12:56:51.040029  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt: {Name:mk08fe599b3bb9f9eafe363d4dcfa2dc4583d108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:51.040291  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key ...
	I0923 12:56:51.040316  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key: {Name:mke55afec0b5332166375bf6241593073b8f40da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:51.040432  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:56:51.040459  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:56:51.040472  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:56:51.040484  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:56:51.040497  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:56:51.040509  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:56:51.040524  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:56:51.040539  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:56:51.040619  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 12:56:51.040660  682373 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 12:56:51.040672  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:56:51.040698  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:56:51.040726  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:56:51.040750  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:56:51.040798  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:56:51.040830  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.040846  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.040863  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.041476  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:56:51.067263  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:56:51.091814  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:56:51.115009  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:56:51.138682  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 12:56:51.162647  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:56:51.186729  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:56:51.210155  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:56:51.233576  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 12:56:51.256633  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 12:56:51.279649  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:56:51.303438  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:56:51.320192  682373 ssh_runner.go:195] Run: openssl version
	I0923 12:56:51.326310  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 12:56:51.337813  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.342410  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.342469  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.348141  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:56:51.358951  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:56:51.369927  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.374498  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.374569  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.380225  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:56:51.390788  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 12:56:51.401357  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.405984  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.406065  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.411938  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 12:56:51.422798  682373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:56:51.426778  682373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:56:51.426837  682373 kubeadm.go:392] StartCluster: {Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:56:51.426911  682373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 12:56:51.426969  682373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 12:56:51.467074  682373 cri.go:89] found id: ""
	I0923 12:56:51.467159  682373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:56:51.482686  682373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 12:56:51.497867  682373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 12:56:51.512428  682373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:56:51.512454  682373 kubeadm.go:157] found existing configuration files:
	
	I0923 12:56:51.512511  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 12:56:51.529985  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:56:51.530093  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 12:56:51.542142  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 12:56:51.550802  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:56:51.550892  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 12:56:51.560648  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 12:56:51.570247  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:56:51.570324  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 12:56:51.580148  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 12:56:51.589038  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:56:51.589128  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
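The four grep/rm pairs above follow one pattern: keep an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so "kubeadm init" can regenerate it. A minimal sketch of that pattern (illustrative only, not minikube's code):

	// Sketch: remove stale kubeadm-generated kubeconfigs that do not target
	// the expected control-plane endpoint.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // config already targets the expected endpoint; keep it
			}
			fmt.Printf("removing stale or missing config %s\n", f)
			_ = os.Remove(f) // a missing file is fine, mirroring "rm -f"
		}
	}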
	I0923 12:56:51.598472  682373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 12:56:51.709387  682373 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 12:56:51.709477  682373 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 12:56:51.804679  682373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 12:56:51.804878  682373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 12:56:51.805013  682373 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 12:56:51.813809  682373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 12:56:51.816648  682373 out.go:235]   - Generating certificates and keys ...
	I0923 12:56:51.817490  682373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 12:56:51.817573  682373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 12:56:51.891229  682373 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 12:56:51.977862  682373 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 12:56:52.256371  682373 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 12:56:52.418600  682373 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 12:56:52.566134  682373 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 12:56:52.566417  682373 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-097312 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	I0923 12:56:52.754339  682373 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 12:56:52.754631  682373 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-097312 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	I0923 12:56:52.984244  682373 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 12:56:53.199395  682373 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 12:56:53.333105  682373 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 12:56:53.333280  682373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 12:56:53.475215  682373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 12:56:53.703024  682373 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 12:56:53.843337  682373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 12:56:54.031020  682373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 12:56:54.307973  682373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 12:56:54.308522  682373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 12:56:54.312025  682373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 12:56:54.415301  682373 out.go:235]   - Booting up control plane ...
	I0923 12:56:54.415467  682373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 12:56:54.415596  682373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 12:56:54.415675  682373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 12:56:54.415768  682373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 12:56:54.415870  682373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 12:56:54.415955  682373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 12:56:54.481155  682373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 12:56:54.481329  682373 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 12:56:54.981948  682373 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.226424ms
	I0923 12:56:54.982063  682373 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 12:57:01.058259  682373 kubeadm.go:310] [api-check] The API server is healthy after 6.078664089s
	I0923 12:57:01.078738  682373 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 12:57:01.102575  682373 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 12:57:01.638520  682373 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 12:57:01.638793  682373 kubeadm.go:310] [mark-control-plane] Marking the node ha-097312 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 12:57:01.654796  682373 kubeadm.go:310] [bootstrap-token] Using token: tjz9o5.go3sw7ivocitep6z
	I0923 12:57:01.656792  682373 out.go:235]   - Configuring RBAC rules ...
	I0923 12:57:01.656993  682373 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 12:57:01.670875  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 12:57:01.681661  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 12:57:01.686098  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 12:57:01.693270  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 12:57:01.698752  682373 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 12:57:01.717473  682373 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 12:57:02.034772  682373 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 12:57:02.465304  682373 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 12:57:02.466345  682373 kubeadm.go:310] 
	I0923 12:57:02.466441  682373 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 12:57:02.466453  682373 kubeadm.go:310] 
	I0923 12:57:02.466593  682373 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 12:57:02.466605  682373 kubeadm.go:310] 
	I0923 12:57:02.466637  682373 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 12:57:02.466743  682373 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 12:57:02.466828  682373 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 12:57:02.466838  682373 kubeadm.go:310] 
	I0923 12:57:02.466914  682373 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 12:57:02.466921  682373 kubeadm.go:310] 
	I0923 12:57:02.466984  682373 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 12:57:02.466993  682373 kubeadm.go:310] 
	I0923 12:57:02.467078  682373 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 12:57:02.467176  682373 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 12:57:02.467278  682373 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 12:57:02.467287  682373 kubeadm.go:310] 
	I0923 12:57:02.467400  682373 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 12:57:02.467489  682373 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 12:57:02.467520  682373 kubeadm.go:310] 
	I0923 12:57:02.467645  682373 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tjz9o5.go3sw7ivocitep6z \
	I0923 12:57:02.467825  682373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff \
	I0923 12:57:02.467866  682373 kubeadm.go:310] 	--control-plane 
	I0923 12:57:02.467876  682373 kubeadm.go:310] 
	I0923 12:57:02.468002  682373 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 12:57:02.468014  682373 kubeadm.go:310] 
	I0923 12:57:02.468111  682373 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tjz9o5.go3sw7ivocitep6z \
	I0923 12:57:02.468232  682373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff 
	I0923 12:57:02.469853  682373 kubeadm.go:310] W0923 12:56:51.688284     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:57:02.470263  682373 kubeadm.go:310] W0923 12:56:51.689248     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:57:02.470417  682373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
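The sha256 value embedded in the join commands above is kubeadm's pin of the cluster CA public key: a SHA-256 digest over the DER-encoded SubjectPublicKeyInfo of ca.crt. A small sketch of recomputing it, assuming the certificate path shown earlier in this log:

	// Sketch: recompute a kubeadm discovery-token-ca-cert-hash from the CA cert.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}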
	I0923 12:57:02.470437  682373 cni.go:84] Creating CNI manager for ""
	I0923 12:57:02.470446  682373 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 12:57:02.472858  682373 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 12:57:02.474323  682373 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 12:57:02.479759  682373 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 12:57:02.479789  682373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 12:57:02.504445  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 12:57:02.891714  682373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 12:57:02.891813  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:02.891852  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-097312 minikube.k8s.io/updated_at=2024_09_23T12_57_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-097312 minikube.k8s.io/primary=true
	I0923 12:57:03.052741  682373 ops.go:34] apiserver oom_adj: -16
	I0923 12:57:03.052880  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:03.553199  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:04.053904  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:04.553368  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:05.053003  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:05.553371  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:06.053924  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:06.553890  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:06.654158  682373 kubeadm.go:1113] duration metric: took 3.762424286s to wait for elevateKubeSystemPrivileges
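The repeated "kubectl get sa default" runs above form a poll loop: retry roughly every 500ms until the default ServiceAccount exists, which is what the 3.76s elevateKubeSystemPrivileges metric measures. A generic sketch of that wait pattern; the closure below is a placeholder, not a minikube function:

	// Sketch: retry a check on a fixed interval until it succeeds or times out.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitFor(timeout, interval time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := check(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		err := waitFor(30*time.Second, 500*time.Millisecond, func() error {
			// placeholder for the real check, which in the log is
			// "kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig"
			return errors.New("not ready yet")
		})
		fmt.Println(err)
	}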
	I0923 12:57:06.654208  682373 kubeadm.go:394] duration metric: took 15.227377014s to StartCluster
	I0923 12:57:06.654235  682373 settings.go:142] acquiring lock: {Name:mk3da09e51125fc906a9e1276ab490fc7b26b03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:06.654340  682373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:57:06.655289  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/kubeconfig: {Name:mk213d38080414fbe499f6509d2653fd99103348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:06.655604  682373 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:57:06.655633  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 12:57:06.655653  682373 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 12:57:06.655642  682373 start.go:241] waiting for startup goroutines ...
	I0923 12:57:06.655745  682373 addons.go:69] Setting storage-provisioner=true in profile "ha-097312"
	I0923 12:57:06.655797  682373 addons.go:234] Setting addon storage-provisioner=true in "ha-097312"
	I0923 12:57:06.655834  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:06.655835  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:06.655752  682373 addons.go:69] Setting default-storageclass=true in profile "ha-097312"
	I0923 12:57:06.655926  682373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-097312"
	I0923 12:57:06.656390  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.656400  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.656428  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.656430  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.672616  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43027
	I0923 12:57:06.672985  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44797
	I0923 12:57:06.673168  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.673414  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.673768  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.673789  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.673930  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.673964  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.674169  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.674315  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.674361  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:06.674868  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.674975  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.676732  682373 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:57:06.677135  682373 kapi.go:59] client config for ha-097312: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key", CAFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 12:57:06.677778  682373 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 12:57:06.678102  682373 addons.go:234] Setting addon default-storageclass=true in "ha-097312"
	I0923 12:57:06.678152  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:06.678585  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.678637  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.691933  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
	I0923 12:57:06.692442  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.693010  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.693034  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.693367  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.693647  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:06.694766  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34185
	I0923 12:57:06.695192  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.695549  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:06.695721  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.695737  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.696032  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.696640  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.696692  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.698001  682373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:57:06.699592  682373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:57:06.699614  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:57:06.699636  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:06.702740  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.703120  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:06.703136  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.703423  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:06.703599  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:06.703736  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:06.703871  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:57:06.713026  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44863
	I0923 12:57:06.713478  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.714138  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.714157  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.714441  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.714648  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:06.716436  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:06.716678  682373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:57:06.716694  682373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:57:06.716712  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:06.720029  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.720524  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:06.720549  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.720868  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:06.721094  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:06.721284  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:06.721415  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:57:06.794261  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 12:57:06.837196  682373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:57:06.948150  682373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:57:07.376765  682373 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0923 12:57:07.497295  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497329  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497329  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497348  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497659  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.497676  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.497686  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497695  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497795  682373 main.go:141] libmachine: (ha-097312) DBG | Closing plugin on server side
	I0923 12:57:07.497861  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.497875  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.497884  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497899  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497941  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.497955  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.498024  682373 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 12:57:07.498041  682373 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 12:57:07.498159  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.498194  682373 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0923 12:57:07.498211  682373 round_trippers.go:469] Request Headers:
	I0923 12:57:07.498225  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:57:07.498231  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:57:07.498235  682373 main.go:141] libmachine: (ha-097312) DBG | Closing plugin on server side
	I0923 12:57:07.498196  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.509952  682373 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 12:57:07.510797  682373 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 12:57:07.510817  682373 round_trippers.go:469] Request Headers:
	I0923 12:57:07.510829  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:57:07.510834  682373 round_trippers.go:473]     Content-Type: application/json
	I0923 12:57:07.510840  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:57:07.513677  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:57:07.513894  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.513920  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.514234  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.514256  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.516273  682373 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 12:57:07.517649  682373 addons.go:510] duration metric: took 861.992785ms for enable addons: enabled=[storage-provisioner default-storageclass]
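The GET and PUT against /apis/storage.k8s.io/v1/storageclasses above correspond to minikube marking the "standard" StorageClass as the default for the new cluster. A rough client-go sketch of the read side, assuming the host kubeconfig path used in this log (not minikube's own code):

	// Sketch: list StorageClasses and print whether each is annotated as default.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19690-662205/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, sc := range scs.Items {
			fmt.Println(sc.Name, sc.Annotations["storageclass.kubernetes.io/is-default-class"])
		}
	}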
	I0923 12:57:07.517685  682373 start.go:246] waiting for cluster config update ...
	I0923 12:57:07.517698  682373 start.go:255] writing updated cluster config ...
	I0923 12:57:07.519680  682373 out.go:201] 
	I0923 12:57:07.521371  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:07.521468  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:57:07.523127  682373 out.go:177] * Starting "ha-097312-m02" control-plane node in "ha-097312" cluster
	I0923 12:57:07.524508  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:57:07.524539  682373 cache.go:56] Caching tarball of preloaded images
	I0923 12:57:07.524641  682373 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:57:07.524654  682373 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:57:07.524741  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:57:07.524952  682373 start.go:360] acquireMachinesLock for ha-097312-m02: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:57:07.525025  682373 start.go:364] duration metric: took 44.618µs to acquireMachinesLock for "ha-097312-m02"
	I0923 12:57:07.525047  682373 start.go:93] Provisioning new machine with config: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:57:07.525150  682373 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0923 12:57:07.527045  682373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:57:07.527133  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:07.527160  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:07.542505  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0923 12:57:07.542956  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:07.543542  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:07.543583  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:07.543972  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:07.544208  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:07.544349  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:07.544507  682373 start.go:159] libmachine.API.Create for "ha-097312" (driver="kvm2")
	I0923 12:57:07.544535  682373 client.go:168] LocalClient.Create starting
	I0923 12:57:07.544570  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:57:07.544615  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:57:07.544634  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:57:07.544717  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:57:07.544765  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:57:07.544805  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:57:07.544827  682373 main.go:141] libmachine: Running pre-create checks...
	I0923 12:57:07.544832  682373 main.go:141] libmachine: (ha-097312-m02) Calling .PreCreateCheck
	I0923 12:57:07.545067  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetConfigRaw
	I0923 12:57:07.545510  682373 main.go:141] libmachine: Creating machine...
	I0923 12:57:07.545532  682373 main.go:141] libmachine: (ha-097312-m02) Calling .Create
	I0923 12:57:07.545663  682373 main.go:141] libmachine: (ha-097312-m02) Creating KVM machine...
	I0923 12:57:07.547155  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found existing default KVM network
	I0923 12:57:07.547384  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found existing private KVM network mk-ha-097312
	I0923 12:57:07.547524  682373 main.go:141] libmachine: (ha-097312-m02) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02 ...
	I0923 12:57:07.547546  682373 main.go:141] libmachine: (ha-097312-m02) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:57:07.547624  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.547504  682740 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:57:07.547712  682373 main.go:141] libmachine: (ha-097312-m02) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:57:07.802486  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.802340  682740 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa...
	I0923 12:57:07.948816  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.948688  682740 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/ha-097312-m02.rawdisk...
	I0923 12:57:07.948868  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Writing magic tar header
	I0923 12:57:07.948878  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Writing SSH key tar header
	I0923 12:57:07.948886  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.948826  682740 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02 ...
	I0923 12:57:07.949014  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02
	I0923 12:57:07.949056  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02 (perms=drwx------)
	I0923 12:57:07.949066  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:57:07.949084  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:57:07.949106  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:57:07.949118  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:57:07.949129  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:57:07.949139  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:57:07.949156  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:57:07.949167  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:57:07.949178  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home
	I0923 12:57:07.949191  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Skipping /home - not owner
	I0923 12:57:07.949205  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:57:07.949217  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:57:07.949229  682373 main.go:141] libmachine: (ha-097312-m02) Creating domain...
	I0923 12:57:07.950603  682373 main.go:141] libmachine: (ha-097312-m02) define libvirt domain using xml: 
	I0923 12:57:07.950628  682373 main.go:141] libmachine: (ha-097312-m02) <domain type='kvm'>
	I0923 12:57:07.950638  682373 main.go:141] libmachine: (ha-097312-m02)   <name>ha-097312-m02</name>
	I0923 12:57:07.950648  682373 main.go:141] libmachine: (ha-097312-m02)   <memory unit='MiB'>2200</memory>
	I0923 12:57:07.950655  682373 main.go:141] libmachine: (ha-097312-m02)   <vcpu>2</vcpu>
	I0923 12:57:07.950665  682373 main.go:141] libmachine: (ha-097312-m02)   <features>
	I0923 12:57:07.950672  682373 main.go:141] libmachine: (ha-097312-m02)     <acpi/>
	I0923 12:57:07.950678  682373 main.go:141] libmachine: (ha-097312-m02)     <apic/>
	I0923 12:57:07.950685  682373 main.go:141] libmachine: (ha-097312-m02)     <pae/>
	I0923 12:57:07.950692  682373 main.go:141] libmachine: (ha-097312-m02)     
	I0923 12:57:07.950704  682373 main.go:141] libmachine: (ha-097312-m02)   </features>
	I0923 12:57:07.950712  682373 main.go:141] libmachine: (ha-097312-m02)   <cpu mode='host-passthrough'>
	I0923 12:57:07.950720  682373 main.go:141] libmachine: (ha-097312-m02)   
	I0923 12:57:07.950726  682373 main.go:141] libmachine: (ha-097312-m02)   </cpu>
	I0923 12:57:07.950755  682373 main.go:141] libmachine: (ha-097312-m02)   <os>
	I0923 12:57:07.950767  682373 main.go:141] libmachine: (ha-097312-m02)     <type>hvm</type>
	I0923 12:57:07.950775  682373 main.go:141] libmachine: (ha-097312-m02)     <boot dev='cdrom'/>
	I0923 12:57:07.950783  682373 main.go:141] libmachine: (ha-097312-m02)     <boot dev='hd'/>
	I0923 12:57:07.950795  682373 main.go:141] libmachine: (ha-097312-m02)     <bootmenu enable='no'/>
	I0923 12:57:07.950802  682373 main.go:141] libmachine: (ha-097312-m02)   </os>
	I0923 12:57:07.950814  682373 main.go:141] libmachine: (ha-097312-m02)   <devices>
	I0923 12:57:07.950825  682373 main.go:141] libmachine: (ha-097312-m02)     <disk type='file' device='cdrom'>
	I0923 12:57:07.950841  682373 main.go:141] libmachine: (ha-097312-m02)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/boot2docker.iso'/>
	I0923 12:57:07.950853  682373 main.go:141] libmachine: (ha-097312-m02)       <target dev='hdc' bus='scsi'/>
	I0923 12:57:07.950887  682373 main.go:141] libmachine: (ha-097312-m02)       <readonly/>
	I0923 12:57:07.950906  682373 main.go:141] libmachine: (ha-097312-m02)     </disk>
	I0923 12:57:07.950914  682373 main.go:141] libmachine: (ha-097312-m02)     <disk type='file' device='disk'>
	I0923 12:57:07.950920  682373 main.go:141] libmachine: (ha-097312-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:57:07.950931  682373 main.go:141] libmachine: (ha-097312-m02)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/ha-097312-m02.rawdisk'/>
	I0923 12:57:07.950938  682373 main.go:141] libmachine: (ha-097312-m02)       <target dev='hda' bus='virtio'/>
	I0923 12:57:07.950943  682373 main.go:141] libmachine: (ha-097312-m02)     </disk>
	I0923 12:57:07.950950  682373 main.go:141] libmachine: (ha-097312-m02)     <interface type='network'>
	I0923 12:57:07.950956  682373 main.go:141] libmachine: (ha-097312-m02)       <source network='mk-ha-097312'/>
	I0923 12:57:07.950962  682373 main.go:141] libmachine: (ha-097312-m02)       <model type='virtio'/>
	I0923 12:57:07.950967  682373 main.go:141] libmachine: (ha-097312-m02)     </interface>
	I0923 12:57:07.950973  682373 main.go:141] libmachine: (ha-097312-m02)     <interface type='network'>
	I0923 12:57:07.950979  682373 main.go:141] libmachine: (ha-097312-m02)       <source network='default'/>
	I0923 12:57:07.950988  682373 main.go:141] libmachine: (ha-097312-m02)       <model type='virtio'/>
	I0923 12:57:07.951022  682373 main.go:141] libmachine: (ha-097312-m02)     </interface>
	I0923 12:57:07.951047  682373 main.go:141] libmachine: (ha-097312-m02)     <serial type='pty'>
	I0923 12:57:07.951056  682373 main.go:141] libmachine: (ha-097312-m02)       <target port='0'/>
	I0923 12:57:07.951071  682373 main.go:141] libmachine: (ha-097312-m02)     </serial>
	I0923 12:57:07.951083  682373 main.go:141] libmachine: (ha-097312-m02)     <console type='pty'>
	I0923 12:57:07.951094  682373 main.go:141] libmachine: (ha-097312-m02)       <target type='serial' port='0'/>
	I0923 12:57:07.951104  682373 main.go:141] libmachine: (ha-097312-m02)     </console>
	I0923 12:57:07.951110  682373 main.go:141] libmachine: (ha-097312-m02)     <rng model='virtio'>
	I0923 12:57:07.951122  682373 main.go:141] libmachine: (ha-097312-m02)       <backend model='random'>/dev/random</backend>
	I0923 12:57:07.951132  682373 main.go:141] libmachine: (ha-097312-m02)     </rng>
	I0923 12:57:07.951139  682373 main.go:141] libmachine: (ha-097312-m02)     
	I0923 12:57:07.951147  682373 main.go:141] libmachine: (ha-097312-m02)     
	I0923 12:57:07.951155  682373 main.go:141] libmachine: (ha-097312-m02)   </devices>
	I0923 12:57:07.951170  682373 main.go:141] libmachine: (ha-097312-m02) </domain>
	I0923 12:57:07.951208  682373 main.go:141] libmachine: (ha-097312-m02) 
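
The <domain> definition echoed line by line above is mostly static XML with a handful of machine-specific values spliced in. A minimal sketch of generating such a document with text/template follows; the template below is a reduced example under that assumption, not minikube's actual domain template.

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a reduced libvirt domain skeleton; only the values that vary
// per machine (name, memory, vCPUs, ISO/disk paths, network) are templated.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainParams struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISOPath   string
	DiskPath  string
	Network   string
}

func main() {
	p := domainParams{
		Name:      "ha-097312-m02",
		MemoryMiB: 2200,
		VCPUs:     2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/ha-097312-m02.rawdisk",
		Network:   "mk-ha-097312",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
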
	I0923 12:57:07.958737  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:28:cf:23 in network default
	I0923 12:57:07.959212  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:07.959260  682373 main.go:141] libmachine: (ha-097312-m02) Ensuring networks are active...
	I0923 12:57:07.960010  682373 main.go:141] libmachine: (ha-097312-m02) Ensuring network default is active
	I0923 12:57:07.960399  682373 main.go:141] libmachine: (ha-097312-m02) Ensuring network mk-ha-097312 is active
	I0923 12:57:07.960872  682373 main.go:141] libmachine: (ha-097312-m02) Getting domain xml...
	I0923 12:57:07.961596  682373 main.go:141] libmachine: (ha-097312-m02) Creating domain...
	I0923 12:57:09.236958  682373 main.go:141] libmachine: (ha-097312-m02) Waiting to get IP...
	I0923 12:57:09.237872  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:09.238432  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:09.238520  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:09.238409  682740 retry.go:31] will retry after 258.996903ms: waiting for machine to come up
	I0923 12:57:09.498848  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:09.499271  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:09.499300  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:09.499216  682740 retry.go:31] will retry after 390.01253ms: waiting for machine to come up
	I0923 12:57:09.890994  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:09.891540  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:09.891572  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:09.891465  682740 retry.go:31] will retry after 371.935324ms: waiting for machine to come up
	I0923 12:57:10.265244  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:10.265618  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:10.265655  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:10.265585  682740 retry.go:31] will retry after 510.543016ms: waiting for machine to come up
	I0923 12:57:10.777241  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:10.777723  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:10.777746  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:10.777656  682740 retry.go:31] will retry after 522.337855ms: waiting for machine to come up
	I0923 12:57:11.302530  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:11.303002  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:11.303023  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:11.302970  682740 retry.go:31] will retry after 745.395576ms: waiting for machine to come up
	I0923 12:57:12.049866  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:12.050223  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:12.050249  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:12.050180  682740 retry.go:31] will retry after 791.252666ms: waiting for machine to come up
	I0923 12:57:12.842707  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:12.843212  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:12.843250  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:12.843171  682740 retry.go:31] will retry after 1.03083414s: waiting for machine to come up
	I0923 12:57:13.876177  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:13.876677  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:13.876711  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:13.876621  682740 retry.go:31] will retry after 1.686909518s: waiting for machine to come up
	I0923 12:57:15.565124  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:15.565550  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:15.565574  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:15.565500  682740 retry.go:31] will retry after 1.944756654s: waiting for machine to come up
	I0923 12:57:17.512182  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:17.512709  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:17.512742  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:17.512627  682740 retry.go:31] will retry after 2.056101086s: waiting for machine to come up
	I0923 12:57:19.569989  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:19.570397  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:19.570422  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:19.570360  682740 retry.go:31] will retry after 2.406826762s: waiting for machine to come up
	I0923 12:57:21.980169  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:21.980856  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:21.980887  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:21.980793  682740 retry.go:31] will retry after 3.38134268s: waiting for machine to come up
	I0923 12:57:25.364366  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:25.364892  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:25.364919  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:25.364848  682740 retry.go:31] will retry after 4.745352265s: waiting for machine to come up
	I0923 12:57:30.113738  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.114252  682373 main.go:141] libmachine: (ha-097312-m02) Found IP for machine: 192.168.39.214
	I0923 12:57:30.114286  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has current primary IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.114295  682373 main.go:141] libmachine: (ha-097312-m02) Reserving static IP address...
	I0923 12:57:30.114645  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find host DHCP lease matching {name: "ha-097312-m02", mac: "52:54:00:aa:9c:e4", ip: "192.168.39.214"} in network mk-ha-097312
	I0923 12:57:30.195004  682373 main.go:141] libmachine: (ha-097312-m02) Reserved static IP address: 192.168.39.214
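
The repeated "will retry after ..." lines show a growing, jittered backoff while waiting for the DHCP lease to appear. A minimal sketch of that retry loop is below; lookupIP is a hypothetical stand-in for the real libvirt lease query, and the growth factor is an assumption for illustration.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt DHCP leases for
// the machine's MAC address; it returns "" until a lease exists.
func lookupIP(attempt int) string {
	if attempt < 5 {
		return ""
	}
	return "192.168.39.214"
}

// waitForIP retries with a growing, jittered delay until an IP is found or
// the deadline passes, mirroring the "will retry after ..." log lines above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		if ip := lookupIP(attempt); ip != "" {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay each attempt
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Found IP for machine:", ip)
}
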
	I0923 12:57:30.195029  682373 main.go:141] libmachine: (ha-097312-m02) Waiting for SSH to be available...
	I0923 12:57:30.195051  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Getting to WaitForSSH function...
	I0923 12:57:30.198064  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.198485  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.198516  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.198655  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Using SSH client type: external
	I0923 12:57:30.198683  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa (-rw-------)
	I0923 12:57:30.198704  682373 main.go:141] libmachine: (ha-097312-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:57:30.198716  682373 main.go:141] libmachine: (ha-097312-m02) DBG | About to run SSH command:
	I0923 12:57:30.198732  682373 main.go:141] libmachine: (ha-097312-m02) DBG | exit 0
	I0923 12:57:30.322102  682373 main.go:141] libmachine: (ha-097312-m02) DBG | SSH cmd err, output: <nil>: 
	I0923 12:57:30.322535  682373 main.go:141] libmachine: (ha-097312-m02) KVM machine creation complete!
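
WaitForSSH above keeps probing the guest until "exit 0" over SSH succeeds. A simpler sketch of the same "is SSH reachable yet" idea, using a plain TCP dial to port 22 instead of the external ssh client the log shows (reachability only; the real flow also confirms authentication):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSHPort polls the machine's SSH port until a TCP connection succeeds
// or the timeout elapses.
func waitForSSHPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	if err := waitForSSHPort("192.168.39.214:22", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}
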
	I0923 12:57:30.322889  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetConfigRaw
	I0923 12:57:30.324198  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:30.325129  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:30.325321  682373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:57:30.325347  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetState
	I0923 12:57:30.327097  682373 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:57:30.327120  682373 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:57:30.327127  682373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:57:30.327136  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.330398  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.330831  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.330856  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.331084  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.331333  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.331567  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.331779  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.331980  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.332285  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.332308  682373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:57:30.433384  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:57:30.433417  682373 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:57:30.433425  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.436332  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.436753  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.436787  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.436960  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.437226  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.437407  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.437534  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.437680  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.437907  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.437921  682373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:57:30.542610  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:57:30.542690  682373 main.go:141] libmachine: found compatible host: buildroot
	I0923 12:57:30.542698  682373 main.go:141] libmachine: Provisioning with buildroot...
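
Provisioner detection boils down to reading /etc/os-release over SSH and matching its ID/NAME fields (Buildroot here). A small sketch of parsing that key=value output; the parseOSRelease helper is a hypothetical name used only for this example.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the key=value output of /etc/os-release into a map,
// stripping surrounding quotes from values.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		out[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return out
}

func main() {
	osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(osRelease)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"])
	}
}
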
	I0923 12:57:30.542708  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:30.543041  682373 buildroot.go:166] provisioning hostname "ha-097312-m02"
	I0923 12:57:30.543071  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:30.543236  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.546448  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.546897  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.546919  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.547099  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.547300  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.547478  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.547640  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.547814  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.548056  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.548076  682373 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-097312-m02 && echo "ha-097312-m02" | sudo tee /etc/hostname
	I0923 12:57:30.664801  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312-m02
	
	I0923 12:57:30.664827  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.668130  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.668523  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.668560  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.668734  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.668953  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.669161  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.669310  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.669479  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.669670  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.669692  682373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-097312-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-097312-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-097312-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:57:30.782645  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
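
The shell snippet above either rewrites an existing 127.0.1.1 entry or appends one for the new hostname, and does nothing if the hostname is already present. The same decision expressed in Go over the file contents as a string, purely as an illustration:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostnameEntry mirrors the shell logic above: if any line already ends
// with the hostname, leave the file alone; otherwise rewrite an existing
// 127.0.1.1 line or append a new one.
func ensureHostnameEntry(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostnameEntry(hosts, "ha-097312-m02"))
}
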
	I0923 12:57:30.782678  682373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:57:30.782699  682373 buildroot.go:174] setting up certificates
	I0923 12:57:30.782714  682373 provision.go:84] configureAuth start
	I0923 12:57:30.782725  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:30.783040  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:30.785945  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.786433  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.786470  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.786603  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.788815  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.789202  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.789235  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.789394  682373 provision.go:143] copyHostCerts
	I0923 12:57:30.789433  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:57:30.789475  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 12:57:30.789485  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:57:30.789576  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 12:57:30.789670  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:57:30.789696  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 12:57:30.789707  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:57:30.789745  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:57:30.789814  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:57:30.789859  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 12:57:30.789868  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:57:30.789903  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:57:30.789977  682373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.ha-097312-m02 san=[127.0.0.1 192.168.39.214 ha-097312-m02 localhost minikube]
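
The server certificate above is issued by the cached minikube CA with the node's hostnames and IPs as SANs. A compact crypto/x509 sketch of issuing such a cert follows; it generates a throwaway CA in place of loading ca.pem/ca-key.pem, uses ECDSA keys for brevity, and elides error handling, so it is an illustration of the SAN setup rather than minikube's actual code.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem / ca-key.pem from the store.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-097312-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-097312-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.214")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Println("issued server certificate with", len(srvTmpl.DNSNames), "DNS SANs")
}
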
	I0923 12:57:30.922412  682373 provision.go:177] copyRemoteCerts
	I0923 12:57:30.922481  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:57:30.922511  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.925683  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.926050  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.926084  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.926274  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.926483  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.926675  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.926797  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.008599  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:57:31.008683  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:57:31.033933  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:57:31.034023  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:57:31.058490  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:57:31.058585  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:57:31.083172  682373 provision.go:87] duration metric: took 300.435238ms to configureAuth
	I0923 12:57:31.083208  682373 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:57:31.083452  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:31.083557  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.086620  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.087006  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.087040  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.087226  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.087462  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.087673  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.087823  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.088047  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:31.088262  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:31.088294  682373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:57:31.308105  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 12:57:31.308130  682373 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:57:31.308138  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetURL
	I0923 12:57:31.309535  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Using libvirt version 6000000
	I0923 12:57:31.312541  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.312973  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.313010  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.313204  682373 main.go:141] libmachine: Docker is up and running!
	I0923 12:57:31.313219  682373 main.go:141] libmachine: Reticulating splines...
	I0923 12:57:31.313229  682373 client.go:171] duration metric: took 23.76868403s to LocalClient.Create
	I0923 12:57:31.313256  682373 start.go:167] duration metric: took 23.768751533s to libmachine.API.Create "ha-097312"
	I0923 12:57:31.313265  682373 start.go:293] postStartSetup for "ha-097312-m02" (driver="kvm2")
	I0923 12:57:31.313279  682373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:57:31.313296  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.313570  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:57:31.313596  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.315984  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.316386  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.316408  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.316617  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.316830  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.316990  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.317121  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.400827  682373 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:57:31.404978  682373 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:57:31.405008  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:57:31.405090  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:57:31.405188  682373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 12:57:31.405202  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 12:57:31.405345  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:57:31.415010  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:57:31.439229  682373 start.go:296] duration metric: took 125.945282ms for postStartSetup
	I0923 12:57:31.439312  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetConfigRaw
	I0923 12:57:31.439949  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:31.442989  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.443357  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.443391  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.443654  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:57:31.443870  682373 start.go:128] duration metric: took 23.918708009s to createHost
	I0923 12:57:31.443895  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.446222  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.446579  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.446608  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.446760  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.446969  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.447132  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.447282  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.447456  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:31.447638  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:31.447648  682373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:57:31.550685  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727096251.508834892
	
	I0923 12:57:31.550719  682373 fix.go:216] guest clock: 1727096251.508834892
	I0923 12:57:31.550731  682373 fix.go:229] Guest: 2024-09-23 12:57:31.508834892 +0000 UTC Remote: 2024-09-23 12:57:31.443883765 +0000 UTC m=+69.652378832 (delta=64.951127ms)
	I0923 12:57:31.550757  682373 fix.go:200] guest clock delta is within tolerance: 64.951127ms
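
The guest clock check parses the `date +%s.%N` output from the VM and compares it to the host timestamp; the ~65ms delta above is within tolerance. A minimal sketch of that comparison, where the one-second tolerance is an assumption made for this example:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseUnixSecondsNanos converts "seconds.nanoseconds" output from
// `date +%s.%N` into a time.Time.
func parseUnixSecondsNanos(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixSecondsNanos("1727096251.508834892")
	if err != nil {
		fmt.Println(err)
		return
	}
	host := guest.Add(-64951127 * time.Nanosecond) // the "Remote" timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock is off by %s, resync needed\n", delta)
	}
}
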
	I0923 12:57:31.550765  682373 start.go:83] releasing machines lock for "ha-097312-m02", held for 24.025730497s
	I0923 12:57:31.550798  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.551124  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:31.554365  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.554798  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.554829  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.557342  682373 out.go:177] * Found network options:
	I0923 12:57:31.558765  682373 out.go:177]   - NO_PROXY=192.168.39.160
	W0923 12:57:31.560271  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:57:31.560309  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.561020  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.561228  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.561372  682373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:57:31.561417  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	W0923 12:57:31.561455  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:57:31.561533  682373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:57:31.561554  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.564108  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564231  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564516  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.564549  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564574  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.564586  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564758  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.564856  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.564956  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.565019  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.565102  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.565177  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.565238  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.565280  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.802089  682373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:57:31.808543  682373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:57:31.808622  682373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:57:31.824457  682373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:57:31.824502  682373 start.go:495] detecting cgroup driver to use...
	I0923 12:57:31.824591  682373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:57:31.842591  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:57:31.857349  682373 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:57:31.857432  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:57:31.871118  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:57:31.884433  682373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:57:31.998506  682373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:57:32.140771  682373 docker.go:233] disabling docker service ...
	I0923 12:57:32.140848  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:57:32.154917  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:57:32.167722  682373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:57:32.306721  682373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:57:32.442305  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:57:32.455563  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:57:32.473584  682373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:57:32.473664  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.483856  682373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:57:32.483926  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.493889  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.503832  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.514226  682373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:57:32.524620  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.534430  682373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.550444  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
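
The sed calls above patch /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, and add the net.ipv4.ip_unprivileged_port_start sysctl. A simplified Go sketch of the same transformations on an in-memory config string (it skips the conmon_cgroup handling and is not the real implementation):

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf mirrors the sed edits above: pin the pause image, force the
// cgroupfs cgroup manager, and open unprivileged ports via default_sysctls.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(conf))
}
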
	I0923 12:57:32.560917  682373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:57:32.570816  682373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:57:32.570878  682373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:57:32.583098  682373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
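The sysctl failure above is expected on a freshly booted node: net.bridge.bridge-nf-call-iptables only exists once br_netfilter is loaded, which is why the error is tolerated and the module is loaded right after. A minimal reproduction of the sequence:

    sudo sysctl net.bridge.bridge-nf-call-iptables   # fails while br_netfilter is absent
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # succeeds once the module is loaded
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"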
	I0923 12:57:32.592948  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:57:32.720270  682373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 12:57:32.812338  682373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:57:32.812420  682373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 12:57:32.817090  682373 start.go:563] Will wait 60s for crictl version
	I0923 12:57:32.817148  682373 ssh_runner.go:195] Run: which crictl
	I0923 12:57:32.820890  682373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:57:32.862384  682373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 12:57:32.862475  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:57:32.889442  682373 ssh_runner.go:195] Run: crio --version
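The same runtime sanity checks can be repeated by hand on the node; crictl talks to the socket written to /etc/crictl.yaml earlier in this log:

    sudo /usr/bin/crictl version   # RuntimeName: cri-o, RuntimeVersion: 1.29.1
    crio --version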
	I0923 12:57:32.919399  682373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:57:32.921499  682373 out.go:177]   - env NO_PROXY=192.168.39.160
	I0923 12:57:32.923091  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:32.926243  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:32.926570  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:32.926593  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:32.926824  682373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:57:32.930826  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
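The /etc/hosts update above is idempotent: any stale host.minikube.internal entry is stripped before the gateway IP is re-appended. An equivalent two-step sketch (not the literal one-shot bash -c command from the log):

    sudo sed -i '/host\.minikube\.internal$/d' /etc/hosts
    printf '192.168.39.1\thost.minikube.internal\n' | sudo tee -a /etc/hosts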
	I0923 12:57:32.942746  682373 mustload.go:65] Loading cluster: ha-097312
	I0923 12:57:32.942993  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:32.943344  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:32.943396  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:32.959345  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0923 12:57:32.959837  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:32.960440  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:32.960462  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:32.960839  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:32.961073  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:32.962981  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:32.963304  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:32.963359  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:32.979062  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0923 12:57:32.979655  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:32.980147  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:32.980171  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:32.980553  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:32.980783  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:32.980997  682373 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312 for IP: 192.168.39.214
	I0923 12:57:32.981024  682373 certs.go:194] generating shared ca certs ...
	I0923 12:57:32.981042  682373 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:32.981215  682373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:57:32.981259  682373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:57:32.981266  682373 certs.go:256] generating profile certs ...
	I0923 12:57:32.981360  682373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key
	I0923 12:57:32.981395  682373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f
	I0923 12:57:32.981420  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160 192.168.39.214 192.168.39.254]
	I0923 12:57:33.071795  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f ...
	I0923 12:57:33.071829  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f: {Name:mk62bd79cb1d47d4e42d7ff40584a205e823ac92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:33.072049  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f ...
	I0923 12:57:33.072069  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f: {Name:mk7d02454991cfe0917d276979b247a33b0bbebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:33.072179  682373 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	I0923 12:57:33.072334  682373 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key
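The regenerated apiserver certificate has to cover the service IP, both control-plane node IPs and the kube-vip VIP listed in the Generating cert line above. Once the scp further down has placed it on the node, the SANs can be checked with openssl (a sketch using the on-node path from this log):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
    # expect 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.160, 192.168.39.214 and 192.168.39.254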
	I0923 12:57:33.072469  682373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key
	I0923 12:57:33.072488  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:57:33.072504  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:57:33.072515  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:57:33.072525  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:57:33.072541  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:57:33.072553  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:57:33.072563  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:57:33.072575  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:57:33.072624  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 12:57:33.072650  682373 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 12:57:33.072659  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:57:33.072682  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:57:33.072703  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:57:33.072727  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:57:33.072766  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:57:33.072809  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.072831  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.072841  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.072884  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:33.076209  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:33.076612  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:33.076643  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:33.076790  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:33.077013  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:33.077175  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:33.077328  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:57:33.154333  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 12:57:33.159047  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 12:57:33.170550  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 12:57:33.175236  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0923 12:57:33.186589  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 12:57:33.192195  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 12:57:33.206938  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 12:57:33.211432  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 12:57:33.222459  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 12:57:33.226550  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 12:57:33.237861  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 12:57:33.242413  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1671 bytes)
	I0923 12:57:33.252582  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:57:33.276338  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:57:33.301928  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:57:33.327107  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:57:33.353167  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 12:57:33.377281  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:57:33.401324  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:57:33.426736  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:57:33.451659  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 12:57:33.475444  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:57:33.500205  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 12:57:33.524995  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 12:57:33.542090  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0923 12:57:33.558637  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 12:57:33.577724  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 12:57:33.595235  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 12:57:33.613246  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1671 bytes)
	I0923 12:57:33.629756  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 12:57:33.646976  682373 ssh_runner.go:195] Run: openssl version
	I0923 12:57:33.652839  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:57:33.665921  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.671324  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.671395  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.677752  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:57:33.688883  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 12:57:33.699858  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.704184  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.704258  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.709888  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 12:57:33.720601  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 12:57:33.731770  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.736581  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.736662  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.742744  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
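The ln -fs steps above wire each CA into OpenSSL's hash-based lookup directory: the link name is the certificate's subject hash, e.g. for minikubeCA:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0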
	I0923 12:57:33.754098  682373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:57:33.758320  682373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:57:33.758398  682373 kubeadm.go:934] updating node {m02 192.168.39.214 8443 v1.31.1 crio true true} ...
	I0923 12:57:33.758510  682373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-097312-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
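The kubelet unit rendered above is written out a little later as /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in; once both files are on the node, the effective unit can be inspected with systemd itself (a sketch):

    sudo systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl status kubelet --no-pager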
	I0923 12:57:33.758543  682373 kube-vip.go:115] generating kube-vip config ...
	I0923 12:57:33.758604  682373 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:57:33.773852  682373 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:57:33.773946  682373 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
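kube-vip runs as a static pod from this manifest; once it wins the leader election it binds the VIP to the interface named in the env block. A quick on-node check (interface name and address taken from the manifest above):

    ip -4 addr show eth0 | grep 192.168.39.254   # VIP is bound on the current leader
    ss -ltn | grep ':8443'                       # load-balanced apiserver port is listening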
	I0923 12:57:33.774016  682373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:57:33.784005  682373 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 12:57:33.784077  682373 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 12:57:33.795537  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 12:57:33.795576  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:57:33.795628  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:57:33.795645  682373 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0923 12:57:33.795645  682373 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0923 12:57:33.800211  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 12:57:33.800250  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 12:57:34.690726  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:57:34.690835  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:57:34.695973  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 12:57:34.696015  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 12:57:34.821772  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:57:34.859449  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:57:34.859576  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:57:34.865043  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 12:57:34.865081  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
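kubectl, kubeadm and kubelet are fetched from dl.k8s.io with a checksum pinned to the matching .sha256 file. Pulling one binary by hand follows the same pattern (a sketch, using the kubelet URL from the log and the standard upstream checksum layout):

    cd /tmp
    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
    echo "$(curl -Ls https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check
    sudo install -m 0755 kubelet /var/lib/minikube/binaries/v1.31.1/kubelet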
	I0923 12:57:35.467374  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 12:57:35.477615  682373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 12:57:35.494947  682373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:57:35.511461  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 12:57:35.528089  682373 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:57:35.532321  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:57:35.545355  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:57:35.675932  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:57:35.693246  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:35.693787  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:35.693897  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:35.709354  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I0923 12:57:35.709824  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:35.710378  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:35.710405  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:35.710810  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:35.711063  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:35.711227  682373 start.go:317] joinCluster: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:57:35.711360  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 12:57:35.711378  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:35.714477  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:35.714953  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:35.714989  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:35.715229  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:35.715442  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:35.715639  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:35.715775  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:57:35.872553  682373 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:57:35.872604  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xyxxia.g4s5n9l2o4j0fmlt --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m02 --control-plane --apiserver-advertise-address=192.168.39.214 --apiserver-bind-port=8443"
	I0923 12:57:59.258533  682373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xyxxia.g4s5n9l2o4j0fmlt --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m02 --control-plane --apiserver-advertise-address=192.168.39.214 --apiserver-bind-port=8443": (23.385898049s)
	I0923 12:57:59.258586  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 12:57:59.796861  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-097312-m02 minikube.k8s.io/updated_at=2024_09_23T12_57_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-097312 minikube.k8s.io/primary=false
	I0923 12:57:59.924798  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-097312-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 12:58:00.039331  682373 start.go:319] duration metric: took 24.32808596s to joinCluster
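After the join, the new node is labelled with the minikube metadata and the control-plane NoSchedule taint is removed so it can also run workloads (ControlPlane:true Worker:true). Both can be verified from any kubeconfig that reaches the cluster:

    kubectl get node ha-097312-m02 --show-labels | grep minikube.k8s.io/name
    kubectl describe node ha-097312-m02 | grep -i taints   # control-plane:NoSchedule should be gone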
	I0923 12:58:00.039429  682373 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:58:00.039711  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:00.041025  682373 out.go:177] * Verifying Kubernetes components...
	I0923 12:58:00.042555  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:58:00.236705  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:58:00.254117  682373 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:58:00.254361  682373 kapi.go:59] client config for ha-097312: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key", CAFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 12:58:00.254428  682373 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.160:8443
	I0923 12:58:00.254651  682373 node_ready.go:35] waiting up to 6m0s for node "ha-097312-m02" to be "Ready" ...
	I0923 12:58:00.254771  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:00.254779  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:00.254788  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:00.254792  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:00.285534  682373 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0923 12:58:00.755122  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:00.755151  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:00.755162  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:00.755168  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:00.759795  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:01.254994  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:01.255020  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:01.255029  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:01.255034  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:01.269257  682373 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0923 12:58:01.755083  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:01.755109  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:01.755117  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:01.755121  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:01.759623  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:02.255610  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:02.255632  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:02.255641  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:02.255645  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:02.259196  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:02.259691  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:02.755738  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:02.755768  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:02.755777  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:02.755781  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:02.759269  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:03.255079  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:03.255106  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:03.255115  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:03.255120  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:03.259155  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:03.755217  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:03.755244  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:03.755251  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:03.755255  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:03.759086  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:04.255149  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:04.255177  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:04.255187  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:04.255193  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:04.259605  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:04.260038  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:04.755404  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:04.755434  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:04.755446  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:04.755452  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:04.762670  682373 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:58:05.255127  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:05.255157  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:05.255166  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:05.255172  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:05.259007  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:05.755425  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:05.755458  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:05.755470  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:05.755475  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:05.759105  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:06.255090  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:06.255119  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:06.255128  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:06.255134  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:06.259815  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:06.260439  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:06.755181  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:06.755209  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:06.755219  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:06.755226  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:06.758768  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:07.255412  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:07.255447  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:07.255458  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:07.255466  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:07.258578  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:07.755939  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:07.755966  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:07.755975  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:07.755978  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:07.759564  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:08.255677  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:08.255716  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:08.255730  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:08.255735  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:08.259088  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:08.754970  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:08.755000  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:08.755012  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:08.755020  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:08.758314  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:08.758910  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:09.256074  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:09.256105  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:09.256115  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:09.256120  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:09.259267  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:09.754981  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:09.755005  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:09.755014  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:09.755019  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:09.758517  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:10.255140  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:10.255164  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:10.255173  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:10.255178  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:10.261151  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:58:10.755682  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:10.755711  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:10.755722  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:10.755728  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:10.759364  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:10.759961  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:11.255328  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:11.255355  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:11.255363  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:11.255367  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:11.259613  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:11.755288  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:11.755316  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:11.755331  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:11.755336  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:11.759266  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:12.255138  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:12.255270  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:12.255308  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:12.255317  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:12.259134  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:12.755572  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:12.755596  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:12.755604  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:12.755610  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:12.758861  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:13.255907  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:13.255934  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:13.255942  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:13.255946  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:13.259259  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:13.259818  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:13.755217  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:13.755243  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:13.755251  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:13.755255  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:13.759226  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:14.255176  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:14.255208  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:14.255219  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:14.255226  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:14.258744  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:14.755918  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:14.755946  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:14.755953  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:14.755957  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:14.759652  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:15.255703  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:15.255732  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:15.255745  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:15.255754  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:15.259193  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:15.755854  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:15.755888  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:15.755896  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:15.755900  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:15.759137  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:15.759696  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:16.255882  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:16.255910  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:16.255918  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:16.255922  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:16.259597  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:16.755835  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:16.755869  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:16.755887  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:16.755896  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:16.759860  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:17.255730  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:17.255754  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:17.255769  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:17.255773  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:17.259628  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:17.755085  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:17.755111  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:17.755119  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:17.755124  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:17.759249  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:17.759743  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:18.255184  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:18.255211  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.255225  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.255242  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.259648  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:18.754896  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:18.754921  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.754930  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.754935  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.759143  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:18.759759  682373 node_ready.go:49] node "ha-097312-m02" has status "Ready":"True"
	I0923 12:58:18.759779  682373 node_ready.go:38] duration metric: took 18.505092333s for node "ha-097312-m02" to be "Ready" ...
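The polling loop above is the node_ready helper waiting for the Ready condition; the same 6-minute wait can be expressed directly with kubectl:

    kubectl wait --for=condition=Ready node/ha-097312-m02 --timeout=6m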
	I0923 12:58:18.759789  682373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:58:18.759872  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:18.759882  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.759890  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.759895  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.765186  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:58:18.771234  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.771365  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6g9x2
	I0923 12:58:18.771376  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.771387  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.771396  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.775100  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.775960  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:18.775983  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.775993  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.776003  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.779024  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.779526  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.779547  682373 pod_ready.go:82] duration metric: took 8.277628ms for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.779561  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.779632  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-txcxz
	I0923 12:58:18.779642  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.779652  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.779659  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.782895  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.783552  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:18.783573  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.783582  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.783588  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.786568  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:58:18.787170  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.787189  682373 pod_ready.go:82] duration metric: took 7.619712ms for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.787202  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.787274  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312
	I0923 12:58:18.787284  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.787295  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.787303  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.792015  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:18.792787  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:18.792809  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.792820  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.792826  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.796338  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.796833  682373 pod_ready.go:93] pod "etcd-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.796854  682373 pod_ready.go:82] duration metric: took 9.643589ms for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.796863  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.796938  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m02
	I0923 12:58:18.796951  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.796958  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.796962  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.800096  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.800646  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:18.800664  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.800675  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.800680  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.803250  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:58:18.803795  682373 pod_ready.go:93] pod "etcd-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.803820  682373 pod_ready.go:82] duration metric: took 6.946045ms for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.803842  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.955292  682373 request.go:632] Waited for 151.365865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:58:18.955373  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:58:18.955378  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.955388  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.955394  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.959155  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.155346  682373 request.go:632] Waited for 195.422034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.155457  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.155466  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.155481  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.155491  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.158847  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.159413  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:19.159433  682373 pod_ready.go:82] duration metric: took 355.582451ms for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.159446  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.355524  682373 request.go:632] Waited for 195.972937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:58:19.355603  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:58:19.355611  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.355624  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.355634  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.358947  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.555060  682373 request.go:632] Waited for 195.299012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:19.555156  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:19.555165  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.555173  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.555180  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.558664  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.559169  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:19.559189  682373 pod_ready.go:82] duration metric: took 399.735219ms for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.559199  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.755252  682373 request.go:632] Waited for 195.975758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:58:19.755347  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:58:19.755367  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.755395  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.755406  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.759281  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.955410  682373 request.go:632] Waited for 195.442789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.955490  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.955495  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.955504  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.955551  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.960116  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:19.960952  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:19.960978  682373 pod_ready.go:82] duration metric: took 401.771647ms for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.960989  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.155181  682373 request.go:632] Waited for 194.10652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:58:20.155288  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:58:20.155299  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.155307  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.155311  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.158904  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:20.355343  682373 request.go:632] Waited for 195.400275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:20.355420  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:20.355425  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.355434  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.355440  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.358631  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:20.359159  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:20.359188  682373 pod_ready.go:82] duration metric: took 398.191037ms for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.359202  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.555330  682373 request.go:632] Waited for 196.021107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:58:20.555406  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:58:20.555412  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.555420  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.555430  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.559151  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:20.755254  682373 request.go:632] Waited for 195.454293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:20.755335  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:20.755340  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.755347  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.755351  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.759445  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:20.760118  682373 pod_ready.go:93] pod "kube-proxy-drj8m" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:20.760139  682373 pod_ready.go:82] duration metric: took 400.929533ms for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.760148  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.955378  682373 request.go:632] Waited for 195.139639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:58:20.955478  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:58:20.955488  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.955496  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.955517  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.959839  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:21.155010  682373 request.go:632] Waited for 194.343151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.155079  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.155084  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.155092  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.155096  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.158450  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:21.158954  682373 pod_ready.go:93] pod "kube-proxy-z6ss5" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:21.158974  682373 pod_ready.go:82] duration metric: took 398.819585ms for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.158984  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.355051  682373 request.go:632] Waited for 195.979167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:58:21.355148  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:58:21.355153  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.355161  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.355166  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.359586  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:21.554981  682373 request.go:632] Waited for 194.336515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:21.555072  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:21.555080  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.555090  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.555099  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.558426  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:21.558962  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:21.558988  682373 pod_ready.go:82] duration metric: took 399.997577ms for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.558999  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.755254  682373 request.go:632] Waited for 196.12462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:58:21.755345  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:58:21.755351  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.755359  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.755363  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.759215  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:21.955895  682373 request.go:632] Waited for 196.121213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.955983  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.955989  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.955996  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.956001  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.960399  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:21.960900  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:21.960922  682373 pod_ready.go:82] duration metric: took 401.915303ms for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.960933  682373 pod_ready.go:39] duration metric: took 3.201132427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
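The pod_ready phase above repeatedly GETs each system pod (and its node) through the API server until the pod reports the Ready condition. A minimal sketch of the same check with client-go follows; it is an illustration rather than minikube's own helper, and the kubeconfig path is hypothetical.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the named kube-system pod has the Ready condition set to True.
	func podIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Hypothetical kubeconfig path; the test run uses its own profile directory.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ready, err := podIsReady(context.Background(), cs, "etcd-ha-097312")
		fmt.Println("etcd-ha-097312 ready:", ready, err)
	}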
	I0923 12:58:21.960950  682373 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:58:21.961025  682373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:58:21.980626  682373 api_server.go:72] duration metric: took 21.941154667s to wait for apiserver process to appear ...
	I0923 12:58:21.980660  682373 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:58:21.980684  682373 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I0923 12:58:21.985481  682373 api_server.go:279] https://192.168.39.160:8443/healthz returned 200:
	ok
	I0923 12:58:21.985563  682373 round_trippers.go:463] GET https://192.168.39.160:8443/version
	I0923 12:58:21.985574  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.985582  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.985586  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.986808  682373 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:58:21.987069  682373 api_server.go:141] control plane version: v1.31.1
	I0923 12:58:21.987104  682373 api_server.go:131] duration metric: took 6.43733ms to wait for apiserver health ...
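The healthz and version probes above are plain HTTPS GETs; the apiserver normally exposes /healthz and /version to unauthenticated clients through the default system:public-info-viewer binding, so an anonymous probe typically works. A rough equivalent is sketched below, with certificate verification skipped only for brevity (the real client trusts minikube's CA instead).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// InsecureSkipVerify is for this illustration only.
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		for _, path := range []string{"/healthz", "/version"} {
			resp, err := client.Get("https://192.168.39.160:8443" + path)
			if err != nil {
				fmt.Println(path, err)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Println(path, resp.Status, string(body))
		}
	}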
	I0923 12:58:21.987113  682373 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:58:22.155587  682373 request.go:632] Waited for 168.378674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.155651  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.155657  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.155665  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.155669  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.166855  682373 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 12:58:22.174103  682373 system_pods.go:59] 17 kube-system pods found
	I0923 12:58:22.174149  682373 system_pods.go:61] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:58:22.174157  682373 system_pods.go:61] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:58:22.174164  682373 system_pods.go:61] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:58:22.174170  682373 system_pods.go:61] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:58:22.174176  682373 system_pods.go:61] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:58:22.174182  682373 system_pods.go:61] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:58:22.174188  682373 system_pods.go:61] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:58:22.174194  682373 system_pods.go:61] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:58:22.174199  682373 system_pods.go:61] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:58:22.174205  682373 system_pods.go:61] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:58:22.174214  682373 system_pods.go:61] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:58:22.174226  682373 system_pods.go:61] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:58:22.174233  682373 system_pods.go:61] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:58:22.174240  682373 system_pods.go:61] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:58:22.174247  682373 system_pods.go:61] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:58:22.174253  682373 system_pods.go:61] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:58:22.174264  682373 system_pods.go:61] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:58:22.174277  682373 system_pods.go:74] duration metric: took 187.156047ms to wait for pod list to return data ...
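The repeated "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter, which defaults to roughly 5 QPS with a burst of 10 when nothing is configured. If such polling ever needed to go faster, the limiter could be relaxed on the rest.Config; this is only a sketch of that knob, not something the test changes.

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		// Raise client-go's default client-side limiter (roughly QPS 5 / Burst 10 when unset).
		cfg.QPS = 50
		cfg.Burst = 100
		_, err = kubernetes.NewForConfig(cfg)
		fmt.Println("client built:", err == nil)
	}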
	I0923 12:58:22.174293  682373 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:58:22.355843  682373 request.go:632] Waited for 181.449658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:58:22.355909  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:58:22.355914  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.355922  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.355927  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.360440  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:22.360699  682373 default_sa.go:45] found service account: "default"
	I0923 12:58:22.360716  682373 default_sa.go:55] duration metric: took 186.414512ms for default service account to be created ...
	I0923 12:58:22.360725  682373 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:58:22.555206  682373 request.go:632] Waited for 194.405433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.555295  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.555301  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.555308  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.555316  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.560454  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:58:22.566018  682373 system_pods.go:86] 17 kube-system pods found
	I0923 12:58:22.566047  682373 system_pods.go:89] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:58:22.566053  682373 system_pods.go:89] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:58:22.566057  682373 system_pods.go:89] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:58:22.566061  682373 system_pods.go:89] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:58:22.566064  682373 system_pods.go:89] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:58:22.566068  682373 system_pods.go:89] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:58:22.566072  682373 system_pods.go:89] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:58:22.566075  682373 system_pods.go:89] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:58:22.566079  682373 system_pods.go:89] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:58:22.566083  682373 system_pods.go:89] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:58:22.566086  682373 system_pods.go:89] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:58:22.566090  682373 system_pods.go:89] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:58:22.566093  682373 system_pods.go:89] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:58:22.566097  682373 system_pods.go:89] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:58:22.566100  682373 system_pods.go:89] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:58:22.566103  682373 system_pods.go:89] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:58:22.566106  682373 system_pods.go:89] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:58:22.566112  682373 system_pods.go:126] duration metric: took 205.38119ms to wait for k8s-apps to be running ...
	I0923 12:58:22.566121  682373 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:58:22.566168  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:58:22.581419  682373 system_svc.go:56] duration metric: took 15.287038ms WaitForService to wait for kubelet
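The kubelet check above shells out over SSH and relies on systemctl's exit status: `systemctl is-active --quiet` prints nothing and exits 0 only when the unit is active. A compact local sketch of the same idea:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses output; the exit status alone carries the answer.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}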
	I0923 12:58:22.581451  682373 kubeadm.go:582] duration metric: took 22.541987533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:58:22.581470  682373 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:58:22.755938  682373 request.go:632] Waited for 174.364793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes
	I0923 12:58:22.756006  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes
	I0923 12:58:22.756011  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.756019  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.756027  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.760246  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:22.760965  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:58:22.760989  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:58:22.761000  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:58:22.761004  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:58:22.761010  682373 node_conditions.go:105] duration metric: took 179.533922ms to run NodePressure ...
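The NodePressure step only reads back each node's reported capacity, which is where the ephemeral-storage and CPU figures above come from. A minimal client-go sketch that prints the same two values per node (kubeconfig path hypothetical):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}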
	I0923 12:58:22.761032  682373 start.go:241] waiting for startup goroutines ...
	I0923 12:58:22.761061  682373 start.go:255] writing updated cluster config ...
	I0923 12:58:22.763224  682373 out.go:201] 
	I0923 12:58:22.764656  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:22.764766  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:58:22.766263  682373 out.go:177] * Starting "ha-097312-m03" control-plane node in "ha-097312" cluster
	I0923 12:58:22.767263  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:58:22.767288  682373 cache.go:56] Caching tarball of preloaded images
	I0923 12:58:22.767425  682373 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:58:22.767438  682373 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:58:22.767549  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:58:22.767768  682373 start.go:360] acquireMachinesLock for ha-097312-m03: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:58:22.767826  682373 start.go:364] duration metric: took 34.115µs to acquireMachinesLock for "ha-097312-m03"
	I0923 12:58:22.767850  682373 start.go:93] Provisioning new machine with config: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:58:22.767994  682373 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0923 12:58:22.769439  682373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:58:22.769539  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:22.769588  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:22.784952  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0923 12:58:22.785373  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:22.785878  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:22.785904  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:22.786220  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:22.786438  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:22.786607  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:22.786798  682373 start.go:159] libmachine.API.Create for "ha-097312" (driver="kvm2")
	I0923 12:58:22.786843  682373 client.go:168] LocalClient.Create starting
	I0923 12:58:22.786909  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:58:22.786967  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:58:22.786989  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:58:22.787065  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:58:22.787087  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:58:22.787098  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:58:22.787116  682373 main.go:141] libmachine: Running pre-create checks...
	I0923 12:58:22.787123  682373 main.go:141] libmachine: (ha-097312-m03) Calling .PreCreateCheck
	I0923 12:58:22.787356  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetConfigRaw
	I0923 12:58:22.787880  682373 main.go:141] libmachine: Creating machine...
	I0923 12:58:22.787894  682373 main.go:141] libmachine: (ha-097312-m03) Calling .Create
	I0923 12:58:22.788064  682373 main.go:141] libmachine: (ha-097312-m03) Creating KVM machine...
	I0923 12:58:22.789249  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found existing default KVM network
	I0923 12:58:22.789434  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found existing private KVM network mk-ha-097312
	I0923 12:58:22.789576  682373 main.go:141] libmachine: (ha-097312-m03) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03 ...
	I0923 12:58:22.789598  682373 main.go:141] libmachine: (ha-097312-m03) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:58:22.789697  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:22.789573  683157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:58:22.789778  682373 main.go:141] libmachine: (ha-097312-m03) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:58:23.067488  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:23.067344  683157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa...
	I0923 12:58:23.227591  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:23.227420  683157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/ha-097312-m03.rawdisk...
	I0923 12:58:23.227631  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Writing magic tar header
	I0923 12:58:23.227668  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Writing SSH key tar header
	I0923 12:58:23.227688  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:23.227552  683157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03 ...
	I0923 12:58:23.227701  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03 (perms=drwx------)
	I0923 12:58:23.227722  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:58:23.227735  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:58:23.227750  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:58:23.227770  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:58:23.227784  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03
	I0923 12:58:23.227800  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:58:23.227813  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:58:23.227827  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:58:23.227839  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:58:23.227850  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:58:23.227887  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:58:23.227917  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home
	I0923 12:58:23.227930  682373 main.go:141] libmachine: (ha-097312-m03) Creating domain...
	I0923 12:58:23.227949  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Skipping /home - not owner
	I0923 12:58:23.228646  682373 main.go:141] libmachine: (ha-097312-m03) define libvirt domain using xml: 
	I0923 12:58:23.228661  682373 main.go:141] libmachine: (ha-097312-m03) <domain type='kvm'>
	I0923 12:58:23.228669  682373 main.go:141] libmachine: (ha-097312-m03)   <name>ha-097312-m03</name>
	I0923 12:58:23.228688  682373 main.go:141] libmachine: (ha-097312-m03)   <memory unit='MiB'>2200</memory>
	I0923 12:58:23.228717  682373 main.go:141] libmachine: (ha-097312-m03)   <vcpu>2</vcpu>
	I0923 12:58:23.228738  682373 main.go:141] libmachine: (ha-097312-m03)   <features>
	I0923 12:58:23.228750  682373 main.go:141] libmachine: (ha-097312-m03)     <acpi/>
	I0923 12:58:23.228767  682373 main.go:141] libmachine: (ha-097312-m03)     <apic/>
	I0923 12:58:23.228781  682373 main.go:141] libmachine: (ha-097312-m03)     <pae/>
	I0923 12:58:23.228788  682373 main.go:141] libmachine: (ha-097312-m03)     
	I0923 12:58:23.228798  682373 main.go:141] libmachine: (ha-097312-m03)   </features>
	I0923 12:58:23.228813  682373 main.go:141] libmachine: (ha-097312-m03)   <cpu mode='host-passthrough'>
	I0923 12:58:23.228824  682373 main.go:141] libmachine: (ha-097312-m03)   
	I0923 12:58:23.228832  682373 main.go:141] libmachine: (ha-097312-m03)   </cpu>
	I0923 12:58:23.228843  682373 main.go:141] libmachine: (ha-097312-m03)   <os>
	I0923 12:58:23.228853  682373 main.go:141] libmachine: (ha-097312-m03)     <type>hvm</type>
	I0923 12:58:23.228866  682373 main.go:141] libmachine: (ha-097312-m03)     <boot dev='cdrom'/>
	I0923 12:58:23.228881  682373 main.go:141] libmachine: (ha-097312-m03)     <boot dev='hd'/>
	I0923 12:58:23.228893  682373 main.go:141] libmachine: (ha-097312-m03)     <bootmenu enable='no'/>
	I0923 12:58:23.228902  682373 main.go:141] libmachine: (ha-097312-m03)   </os>
	I0923 12:58:23.228911  682373 main.go:141] libmachine: (ha-097312-m03)   <devices>
	I0923 12:58:23.228922  682373 main.go:141] libmachine: (ha-097312-m03)     <disk type='file' device='cdrom'>
	I0923 12:58:23.228960  682373 main.go:141] libmachine: (ha-097312-m03)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/boot2docker.iso'/>
	I0923 12:58:23.228987  682373 main.go:141] libmachine: (ha-097312-m03)       <target dev='hdc' bus='scsi'/>
	I0923 12:58:23.228998  682373 main.go:141] libmachine: (ha-097312-m03)       <readonly/>
	I0923 12:58:23.229011  682373 main.go:141] libmachine: (ha-097312-m03)     </disk>
	I0923 12:58:23.229023  682373 main.go:141] libmachine: (ha-097312-m03)     <disk type='file' device='disk'>
	I0923 12:58:23.229035  682373 main.go:141] libmachine: (ha-097312-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:58:23.229050  682373 main.go:141] libmachine: (ha-097312-m03)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/ha-097312-m03.rawdisk'/>
	I0923 12:58:23.229060  682373 main.go:141] libmachine: (ha-097312-m03)       <target dev='hda' bus='virtio'/>
	I0923 12:58:23.229070  682373 main.go:141] libmachine: (ha-097312-m03)     </disk>
	I0923 12:58:23.229081  682373 main.go:141] libmachine: (ha-097312-m03)     <interface type='network'>
	I0923 12:58:23.229090  682373 main.go:141] libmachine: (ha-097312-m03)       <source network='mk-ha-097312'/>
	I0923 12:58:23.229114  682373 main.go:141] libmachine: (ha-097312-m03)       <model type='virtio'/>
	I0923 12:58:23.229140  682373 main.go:141] libmachine: (ha-097312-m03)     </interface>
	I0923 12:58:23.229160  682373 main.go:141] libmachine: (ha-097312-m03)     <interface type='network'>
	I0923 12:58:23.229172  682373 main.go:141] libmachine: (ha-097312-m03)       <source network='default'/>
	I0923 12:58:23.229186  682373 main.go:141] libmachine: (ha-097312-m03)       <model type='virtio'/>
	I0923 12:58:23.229197  682373 main.go:141] libmachine: (ha-097312-m03)     </interface>
	I0923 12:58:23.229203  682373 main.go:141] libmachine: (ha-097312-m03)     <serial type='pty'>
	I0923 12:58:23.229214  682373 main.go:141] libmachine: (ha-097312-m03)       <target port='0'/>
	I0923 12:58:23.229223  682373 main.go:141] libmachine: (ha-097312-m03)     </serial>
	I0923 12:58:23.229232  682373 main.go:141] libmachine: (ha-097312-m03)     <console type='pty'>
	I0923 12:58:23.229242  682373 main.go:141] libmachine: (ha-097312-m03)       <target type='serial' port='0'/>
	I0923 12:58:23.229252  682373 main.go:141] libmachine: (ha-097312-m03)     </console>
	I0923 12:58:23.229264  682373 main.go:141] libmachine: (ha-097312-m03)     <rng model='virtio'>
	I0923 12:58:23.229283  682373 main.go:141] libmachine: (ha-097312-m03)       <backend model='random'>/dev/random</backend>
	I0923 12:58:23.229301  682373 main.go:141] libmachine: (ha-097312-m03)     </rng>
	I0923 12:58:23.229309  682373 main.go:141] libmachine: (ha-097312-m03)     
	I0923 12:58:23.229315  682373 main.go:141] libmachine: (ha-097312-m03)     
	I0923 12:58:23.229321  682373 main.go:141] libmachine: (ha-097312-m03)   </devices>
	I0923 12:58:23.229324  682373 main.go:141] libmachine: (ha-097312-m03) </domain>
	I0923 12:58:23.229331  682373 main.go:141] libmachine: (ha-097312-m03) 
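The kvm2 driver hands the <domain> XML above to libvirt, which defines the domain and then boots it. Outside the driver, the same two steps could be reproduced against the same qemu:///system connection with virsh; the sketch below is illustrative only, and the XML file path is made up.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runVirsh is a thin wrapper around the virsh CLI on the qemu:///system connection.
	func runVirsh(args ...string) error {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		// Hypothetical file holding the <domain> XML printed in the log above.
		if err := runVirsh("define", "/tmp/ha-097312-m03.xml"); err != nil {
			panic(err)
		}
		if err := runVirsh("start", "ha-097312-m03"); err != nil {
			panic(err)
		}
	}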
	I0923 12:58:23.236443  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:ba:f1:b5 in network default
	I0923 12:58:23.237006  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:23.237021  682373 main.go:141] libmachine: (ha-097312-m03) Ensuring networks are active...
	I0923 12:58:23.237857  682373 main.go:141] libmachine: (ha-097312-m03) Ensuring network default is active
	I0923 12:58:23.238229  682373 main.go:141] libmachine: (ha-097312-m03) Ensuring network mk-ha-097312 is active
	I0923 12:58:23.238611  682373 main.go:141] libmachine: (ha-097312-m03) Getting domain xml...
	I0923 12:58:23.239268  682373 main.go:141] libmachine: (ha-097312-m03) Creating domain...
	I0923 12:58:24.490717  682373 main.go:141] libmachine: (ha-097312-m03) Waiting to get IP...
	I0923 12:58:24.491571  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:24.492070  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:24.492095  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:24.492045  683157 retry.go:31] will retry after 248.750792ms: waiting for machine to come up
	I0923 12:58:24.742884  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:24.743526  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:24.743556  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:24.743474  683157 retry.go:31] will retry after 255.093938ms: waiting for machine to come up
	I0923 12:58:24.999946  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:25.000409  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:25.000437  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:25.000354  683157 retry.go:31] will retry after 366.076555ms: waiting for machine to come up
	I0923 12:58:25.367854  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:25.368400  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:25.368423  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:25.368345  683157 retry.go:31] will retry after 602.474157ms: waiting for machine to come up
	I0923 12:58:25.972258  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:25.972737  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:25.972759  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:25.972695  683157 retry.go:31] will retry after 694.585684ms: waiting for machine to come up
	I0923 12:58:26.668534  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:26.668902  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:26.668929  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:26.668869  683157 retry.go:31] will retry after 679.770142ms: waiting for machine to come up
	I0923 12:58:27.350837  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:27.351322  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:27.351348  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:27.351244  683157 retry.go:31] will retry after 724.740855ms: waiting for machine to come up
	I0923 12:58:28.077164  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:28.077637  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:28.077666  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:28.077575  683157 retry.go:31] will retry after 928.712628ms: waiting for machine to come up
	I0923 12:58:29.008154  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:29.008550  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:29.008579  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:29.008504  683157 retry.go:31] will retry after 1.450407892s: waiting for machine to come up
	I0923 12:58:30.461271  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:30.461634  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:30.461657  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:30.461609  683157 retry.go:31] will retry after 1.972612983s: waiting for machine to come up
	I0923 12:58:32.435439  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:32.435994  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:32.436026  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:32.435936  683157 retry.go:31] will retry after 2.428412852s: waiting for machine to come up
	I0923 12:58:34.866973  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:34.867442  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:34.867469  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:34.867396  683157 retry.go:31] will retry after 3.321760424s: waiting for machine to come up
	I0923 12:58:38.190761  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:38.191232  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:38.191259  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:38.191169  683157 retry.go:31] will retry after 3.240294118s: waiting for machine to come up
	I0923 12:58:41.435372  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:41.435812  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:41.435833  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:41.435772  683157 retry.go:31] will retry after 4.450333931s: waiting for machine to come up
	I0923 12:58:45.888567  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:45.889089  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has current primary IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:45.889129  682373 main.go:141] libmachine: (ha-097312-m03) Found IP for machine: 192.168.39.174
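The "Waiting to get IP" loop above retries with a growing delay until libvirt's DHCP lease table for mk-ha-097312 contains the guest's MAC address. The same information is visible through virsh net-dhcp-leases; a hedged sketch of such a poll:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// leaseFor scans `virsh net-dhcp-leases` output for the given MAC and returns its line, if any.
	func leaseFor(network, mac string) (string, bool) {
		out, err := exec.Command("virsh", "--connect", "qemu:///system", "net-dhcp-leases", network).Output()
		if err != nil {
			return "", false
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, mac) {
				return strings.TrimSpace(line), true
			}
		}
		return "", false
	}

	func main() {
		delay := 250 * time.Millisecond
		for i := 0; i < 15; i++ {
			if line, ok := leaseFor("mk-ha-097312", "52:54:00:39:fc:65"); ok {
				fmt.Println("lease:", line)
				return
			}
			time.Sleep(delay)
			delay += delay / 2 // rough backoff, similar in spirit to the retries above
		}
		fmt.Println("no lease found")
	}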
	I0923 12:58:45.889152  682373 main.go:141] libmachine: (ha-097312-m03) Reserving static IP address...
	I0923 12:58:45.889591  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find host DHCP lease matching {name: "ha-097312-m03", mac: "52:54:00:39:fc:65", ip: "192.168.39.174"} in network mk-ha-097312
	I0923 12:58:45.977147  682373 main.go:141] libmachine: (ha-097312-m03) Reserved static IP address: 192.168.39.174
	I0923 12:58:45.977177  682373 main.go:141] libmachine: (ha-097312-m03) Waiting for SSH to be available...
	I0923 12:58:45.977199  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Getting to WaitForSSH function...
	I0923 12:58:45.980053  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:45.980585  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312
	I0923 12:58:45.980626  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find defined IP address of network mk-ha-097312 interface with MAC address 52:54:00:39:fc:65
	I0923 12:58:45.980767  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH client type: external
	I0923 12:58:45.980803  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa (-rw-------)
	I0923 12:58:45.980837  682373 main.go:141] libmachine: (ha-097312-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:58:45.980856  682373 main.go:141] libmachine: (ha-097312-m03) DBG | About to run SSH command:
	I0923 12:58:45.980901  682373 main.go:141] libmachine: (ha-097312-m03) DBG | exit 0
	I0923 12:58:45.984924  682373 main.go:141] libmachine: (ha-097312-m03) DBG | SSH cmd err, output: exit status 255: 
	I0923 12:58:45.984953  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0923 12:58:45.984969  682373 main.go:141] libmachine: (ha-097312-m03) DBG | command : exit 0
	I0923 12:58:45.984980  682373 main.go:141] libmachine: (ha-097312-m03) DBG | err     : exit status 255
	I0923 12:58:45.984992  682373 main.go:141] libmachine: (ha-097312-m03) DBG | output  : 
	I0923 12:58:48.985305  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Getting to WaitForSSH function...
	I0923 12:58:48.988493  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:48.989086  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:48.989132  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:48.989359  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH client type: external
	I0923 12:58:48.989374  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa (-rw-------)
	I0923 12:58:48.989402  682373 main.go:141] libmachine: (ha-097312-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:58:48.989422  682373 main.go:141] libmachine: (ha-097312-m03) DBG | About to run SSH command:
	I0923 12:58:48.989477  682373 main.go:141] libmachine: (ha-097312-m03) DBG | exit 0
	I0923 12:58:49.118512  682373 main.go:141] libmachine: (ha-097312-m03) DBG | SSH cmd err, output: <nil>: 
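
To decide that the guest is reachable, libmachine shells out to /usr/bin/ssh and runs `exit 0`, treating exit status 0 as "SSH is up" and anything else (the status 255 seen earlier) as "keep waiting". A minimal sketch of that probe, with hypothetical host/key values and only a subset of the SSH options shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" on the target via the system ssh binary and reports
// whether the command succeeded, mirroring the WaitForSSH probe in the log.
func sshReady(user, host, keyPath string) bool {
	cmd := exec.Command("/usr/bin/ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	// Hypothetical values for illustration only.
	for i := 0; i < 10; i++ {
		if sshReady("docker", "192.168.39.174", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
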
	I0923 12:58:49.118822  682373 main.go:141] libmachine: (ha-097312-m03) KVM machine creation complete!
	I0923 12:58:49.119172  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetConfigRaw
	I0923 12:58:49.119782  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:49.119996  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:49.120225  682373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:58:49.120260  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetState
	I0923 12:58:49.121499  682373 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:58:49.121514  682373 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:58:49.121519  682373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:58:49.121524  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.124296  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.124870  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.124900  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.125084  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.125266  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.125423  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.125561  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.125760  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.126112  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.126128  682373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:58:49.237975  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:58:49.238009  682373 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:58:49.238020  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.241019  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.241453  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.241483  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.241651  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.241948  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.242157  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.242344  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.242559  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.242800  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.242816  682373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:58:49.358902  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:58:49.358998  682373 main.go:141] libmachine: found compatible host: buildroot
	I0923 12:58:49.359008  682373 main.go:141] libmachine: Provisioning with buildroot...
	I0923 12:58:49.359016  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:49.359321  682373 buildroot.go:166] provisioning hostname "ha-097312-m03"
	I0923 12:58:49.359351  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:49.359578  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.362575  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.363012  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.363043  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.363307  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.363499  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.363671  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.363837  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.363993  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.364183  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.364200  682373 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-097312-m03 && echo "ha-097312-m03" | sudo tee /etc/hostname
	I0923 12:58:49.489492  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312-m03
	
	I0923 12:58:49.489526  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.492826  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.493233  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.493269  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.493628  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.493912  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.494119  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.494303  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.494519  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.494751  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.494771  682373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-097312-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-097312-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-097312-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:58:49.623370  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
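
The SSH command above sets the hostname and then pins 127.0.1.1 to it in /etc/hosts. A small sketch that only assembles the same shell snippet for a given node name (string building only; the real code sends it over the SSH session shown above):

package main

import "fmt"

// hostsPatch returns the shell snippet that maps 127.0.1.1 to the node's
// hostname, equivalent to the command in the log above.
func hostsPatch(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsPatch("ha-097312-m03"))
}
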
	I0923 12:58:49.623402  682373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:58:49.623425  682373 buildroot.go:174] setting up certificates
	I0923 12:58:49.623436  682373 provision.go:84] configureAuth start
	I0923 12:58:49.623450  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:49.623804  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:49.626789  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.627251  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.627282  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.627473  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.630844  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.631265  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.631296  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.631526  682373 provision.go:143] copyHostCerts
	I0923 12:58:49.631561  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:58:49.631598  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 12:58:49.631607  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:58:49.631691  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 12:58:49.631792  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:58:49.631821  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 12:58:49.631827  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:58:49.631868  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:58:49.631937  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:58:49.631962  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 12:58:49.631969  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:58:49.632010  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:58:49.632096  682373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.ha-097312-m03 san=[127.0.0.1 192.168.39.174 ha-097312-m03 localhost minikube]
	I0923 12:58:49.828110  682373 provision.go:177] copyRemoteCerts
	I0923 12:58:49.828198  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:58:49.828227  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.830911  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.831302  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.831336  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.831594  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.831831  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.832077  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.832238  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:49.921694  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:58:49.921777  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:58:49.946275  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:58:49.946377  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:58:49.972209  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:58:49.972329  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 12:58:49.998142  682373 provision.go:87] duration metric: took 374.691465ms to configureAuth
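
configureAuth (above) generates a server certificate whose SANs include the node's IPs and names, then copies the CA and server cert/key to /etc/docker. As a hedged sketch of the SAN-bearing certificate step only, using Go's crypto/x509 with a throwaway CA (organization names and validity periods are arbitrary choices, not minikube's):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// makeServerCert signs a server certificate for the given DNS names and IPs
// with the supplied CA, roughly the shape of the
// "generating server cert ... san=[...]" step above.
func makeServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"example"}, CommonName: dnsNames[0]},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway CA for the sketch; minikube instead reuses the ca.pem /
	// ca-key.pem files referenced in the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	der, _, err := makeServerCert(ca, caKey,
		[]string{"ha-097312-m03", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.174")})
	fmt.Println("server cert DER bytes:", len(der), "err:", err)
}
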
	I0923 12:58:49.998176  682373 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:58:49.998394  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:49.998468  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.001457  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.001907  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.002003  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.002101  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.002332  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.002519  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.002830  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.003058  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:50.003274  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:50.003290  682373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:58:50.239197  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 12:58:50.239229  682373 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:58:50.239238  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetURL
	I0923 12:58:50.240570  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using libvirt version 6000000
	I0923 12:58:50.243373  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.243723  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.243750  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.243998  682373 main.go:141] libmachine: Docker is up and running!
	I0923 12:58:50.244012  682373 main.go:141] libmachine: Reticulating splines...
	I0923 12:58:50.244021  682373 client.go:171] duration metric: took 27.457166675s to LocalClient.Create
	I0923 12:58:50.244048  682373 start.go:167] duration metric: took 27.457253634s to libmachine.API.Create "ha-097312"
	I0923 12:58:50.244058  682373 start.go:293] postStartSetup for "ha-097312-m03" (driver="kvm2")
	I0923 12:58:50.244067  682373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:58:50.244084  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.244341  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:58:50.244373  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.247177  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.247500  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.247521  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.247754  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.247951  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.248097  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.248197  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:50.333384  682373 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:58:50.338046  682373 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:58:50.338080  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:58:50.338170  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:58:50.338267  682373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 12:58:50.338282  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 12:58:50.338392  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:58:50.348354  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:58:50.372707  682373 start.go:296] duration metric: took 128.633991ms for postStartSetup
	I0923 12:58:50.372762  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetConfigRaw
	I0923 12:58:50.373426  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:50.376697  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.377173  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.377211  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.377593  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:58:50.377873  682373 start.go:128] duration metric: took 27.609858816s to createHost
	I0923 12:58:50.377907  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.380411  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.380907  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.380940  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.381160  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.381382  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.381590  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.381776  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.381976  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:50.382153  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:50.382163  682373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:58:50.503140  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727096330.482204055
	
	I0923 12:58:50.503171  682373 fix.go:216] guest clock: 1727096330.482204055
	I0923 12:58:50.503182  682373 fix.go:229] Guest: 2024-09-23 12:58:50.482204055 +0000 UTC Remote: 2024-09-23 12:58:50.377890431 +0000 UTC m=+148.586385508 (delta=104.313624ms)
	I0923 12:58:50.503201  682373 fix.go:200] guest clock delta is within tolerance: 104.313624ms
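
fix.go compares the guest clock against the host clock and accepts the machine if the absolute delta stays within a tolerance. A tiny sketch of that check; the 2-second tolerance below is an assumption for illustration, not a value taken from the log:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is within tolerance of the
// host clock, in the spirit of the "guest clock delta" check above.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(104 * time.Millisecond)          // delta similar to the log above
	d, ok := clockDeltaOK(guest, host, 2*time.Second)  // assumed tolerance for the sketch
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}
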
	I0923 12:58:50.503207  682373 start.go:83] releasing machines lock for "ha-097312-m03", held for 27.735369252s
	I0923 12:58:50.503226  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.503498  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:50.506212  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.506688  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.506716  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.509222  682373 out.go:177] * Found network options:
	I0923 12:58:50.511101  682373 out.go:177]   - NO_PROXY=192.168.39.160,192.168.39.214
	W0923 12:58:50.512787  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:58:50.512820  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:58:50.512843  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.513731  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.513996  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.514102  682373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:58:50.514157  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	W0923 12:58:50.514279  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:58:50.514318  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:58:50.514393  682373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:58:50.514415  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.517470  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.517502  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.517875  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.517907  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.517943  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.517962  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.518097  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.518178  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.518290  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.518373  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.518440  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.518566  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:50.518640  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.518802  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:50.765065  682373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:58:50.770910  682373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:58:50.770996  682373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:58:50.788872  682373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
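
The step above disables any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix so they cannot conflict with the cluster's CNI. A rough local-filesystem sketch of the same idea (the real command runs find/mv over SSH as root):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs in dir by appending
// ".mk_disabled", similar to the find/mv step in the log above.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println(disabled, err)
}
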
	I0923 12:58:50.788920  682373 start.go:495] detecting cgroup driver to use...
	I0923 12:58:50.790888  682373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:58:50.809431  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:58:50.825038  682373 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:58:50.825112  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:58:50.839523  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:58:50.854328  682373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:58:50.973330  682373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:58:51.114738  682373 docker.go:233] disabling docker service ...
	I0923 12:58:51.114816  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:58:51.129713  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:58:51.142863  682373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:58:51.295068  682373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:58:51.429699  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:58:51.445916  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:58:51.465380  682373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:58:51.465444  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.476939  682373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:58:51.477023  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.489669  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.501133  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.512757  682373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:58:51.524127  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.535054  682373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.553239  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.565038  682373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:58:51.575598  682373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:58:51.575670  682373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:58:51.590718  682373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:58:51.601615  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:58:51.733836  682373 ssh_runner.go:195] Run: sudo systemctl restart crio
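
The CRI-O adjustments above are plain sed edits to /etc/crio/crio.conf.d/02-crio.conf followed by a systemd reload and CRI-O restart. A sketch that only assembles the same command strings (copied from the log) for a runner to execute; it does not talk to the guest itself:

package main

import "fmt"

// crioConfigCmds mirrors the sed edits in the log above: pin the pause image,
// switch the cgroup manager to cgroupfs, and set conmon_cgroup to "pod",
// then reload systemd and restart CRI-O. minikube runs each command over SSH.
func crioConfigCmds(pauseImage string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.10") {
		fmt.Println(c)
	}
}
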
	I0923 12:58:51.836194  682373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:58:51.836276  682373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 12:58:51.841212  682373 start.go:563] Will wait 60s for crictl version
	I0923 12:58:51.841301  682373 ssh_runner.go:195] Run: which crictl
	I0923 12:58:51.845296  682373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:58:51.885994  682373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 12:58:51.886074  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:58:51.916461  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:58:51.949216  682373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:58:51.950816  682373 out.go:177]   - env NO_PROXY=192.168.39.160
	I0923 12:58:51.952396  682373 out.go:177]   - env NO_PROXY=192.168.39.160,192.168.39.214
	I0923 12:58:51.953858  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:51.957017  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:51.957485  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:51.957528  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:51.957807  682373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:58:51.962319  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:58:51.975129  682373 mustload.go:65] Loading cluster: ha-097312
	I0923 12:58:51.975422  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:51.975727  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:51.975781  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:51.992675  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0923 12:58:51.993145  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:51.993728  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:51.993763  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:51.994191  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:51.994434  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:58:51.996127  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:58:51.996593  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:51.996642  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:52.013141  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39117
	I0923 12:58:52.013710  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:52.014272  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:52.014297  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:52.014717  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:52.014958  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:58:52.015174  682373 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312 for IP: 192.168.39.174
	I0923 12:58:52.015189  682373 certs.go:194] generating shared ca certs ...
	I0923 12:58:52.015209  682373 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:58:52.015353  682373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:58:52.015390  682373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:58:52.015406  682373 certs.go:256] generating profile certs ...
	I0923 12:58:52.015485  682373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key
	I0923 12:58:52.015512  682373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec
	I0923 12:58:52.015531  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160 192.168.39.214 192.168.39.174 192.168.39.254]
	I0923 12:58:52.141850  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec ...
	I0923 12:58:52.141895  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec: {Name:mkad80d48481e741ac2c369b88d81a886d1377dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:58:52.142113  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec ...
	I0923 12:58:52.142128  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec: {Name:mkc4802b23ce391f6bffaeddf1263168cc10992d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:58:52.142267  682373 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	I0923 12:58:52.142420  682373 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key
	I0923 12:58:52.142572  682373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key
	I0923 12:58:52.142590  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:58:52.142609  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:58:52.142626  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:58:52.142641  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:58:52.142657  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:58:52.142672  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:58:52.142686  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:58:52.162055  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:58:52.162175  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 12:58:52.162222  682373 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 12:58:52.162262  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:58:52.162301  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:58:52.162335  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:58:52.162366  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:58:52.162425  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:58:52.162463  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.162486  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.162507  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.162554  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:58:52.165353  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:52.165846  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:58:52.165879  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:52.166095  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:58:52.166330  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:58:52.166495  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:58:52.166657  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:58:52.246349  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 12:58:52.251941  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 12:58:52.264760  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 12:58:52.269374  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0923 12:58:52.280997  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 12:58:52.286014  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 12:58:52.298212  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 12:58:52.302755  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 12:58:52.314763  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 12:58:52.319431  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 12:58:52.330709  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 12:58:52.335071  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1671 bytes)
	I0923 12:58:52.347748  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:58:52.374394  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:58:52.402200  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:58:52.428792  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:58:52.453080  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0923 12:58:52.477297  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:58:52.502367  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:58:52.527508  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:58:52.552924  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:58:52.577615  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 12:58:52.602992  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 12:58:52.628751  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 12:58:52.648794  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0923 12:58:52.665863  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 12:58:52.683590  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 12:58:52.703077  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 12:58:52.721135  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1671 bytes)
	I0923 12:58:52.738608  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 12:58:52.756580  682373 ssh_runner.go:195] Run: openssl version
	I0923 12:58:52.762277  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:58:52.773072  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.778133  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.778215  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.784053  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:58:52.795445  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 12:58:52.806223  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.811080  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.811155  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.817004  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 12:58:52.828392  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 12:58:52.839455  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.844434  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.844501  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.850419  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:58:52.861972  682373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:58:52.866305  682373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:58:52.866361  682373 kubeadm.go:934] updating node {m03 192.168.39.174 8443 v1.31.1 crio true true} ...
	I0923 12:58:52.866458  682373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-097312-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:58:52.866484  682373 kube-vip.go:115] generating kube-vip config ...
	I0923 12:58:52.866520  682373 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:58:52.883666  682373 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:58:52.883745  682373 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
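Note: this manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod on each control-plane node; the plndr-cp-lock lease elects one leader, which answers ARP for the VIP 192.168.39.254 and load-balances API traffic on port 8443 (lb_enable/lb_port). A rough sketch of rendering such a manifest from a template, with only the per-cluster values parameterized; the template text is a trimmed, hypothetical stand-in for minikube's real one:

package main

import (
	"os"
	"text/template"
)

// Trimmed, hypothetical kube-vip manifest: only the values that vary per
// cluster (VIP, API server port, interface) are templated.
const kubeVipManifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(kubeVipManifest))
	// Values taken from the log: VIP 192.168.39.254, port 8443, interface eth0.
	_ = tmpl.Execute(os.Stdout, struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.168.39.254", Interface: "eth0", Port: 8443})
}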
	I0923 12:58:52.883809  682373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:58:52.895283  682373 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 12:58:52.895366  682373 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 12:58:52.905663  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 12:58:52.905685  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 12:58:52.905697  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 12:58:52.905721  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:58:52.905750  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:58:52.905775  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:58:52.905694  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:58:52.905887  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:58:52.923501  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:58:52.923608  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:58:52.923612  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 12:58:52.923649  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 12:58:52.923698  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 12:58:52.923733  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 12:58:52.956744  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 12:58:52.956812  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
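Note: the `?checksum=file:...sha256` suffix in the URLs above means each binary is verified against the .sha256 file published next to it on dl.k8s.io before being copied into /var/lib/minikube/binaries. A stdlib-only sketch of that download-and-verify step, assuming the .sha256 file contains just the hex digest (the function name is illustrative, not minikube's):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadVerified fetches url into dest and checks it against the SHA-256
// published at url+".sha256" (the dl.k8s.io convention used above).
func downloadVerified(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash the bytes while writing them to disk.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}

	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
	if err := downloadVerified(url, "/tmp/kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}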
	I0923 12:58:54.045786  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 12:58:54.057369  682373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 12:58:54.076949  682373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:58:54.094827  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 12:58:54.111645  682373 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:58:54.115795  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
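Note: the one-liner above keeps /etc/hosts idempotent: any existing control-plane.minikube.internal line is filtered out and a fresh entry pointing at the HA VIP is appended. An equivalent Go sketch (it rewrites the file in place rather than going through a temp file and sudo cp):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell one-liner above: drop any existing line
// for the host name, then append "ip<TAB>host" so the entry always points at
// the current control-plane VIP.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) || strings.HasSuffix(line, " "+host) {
			continue // stale entry for this host name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}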
	I0923 12:58:54.129074  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:58:54.273605  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:58:54.295098  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:58:54.295704  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:54.295775  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:54.312297  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0923 12:58:54.312791  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:54.313333  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:54.313355  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:54.313727  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:54.314023  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:58:54.314202  682373 start.go:317] joinCluster: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:58:54.314373  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 12:58:54.314400  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:58:54.318048  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:54.318537  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:58:54.318569  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:54.318697  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:58:54.319009  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:58:54.319229  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:58:54.319353  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:58:54.524084  682373 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:58:54.524132  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ll3mfm.tdumzjzob0cezji3 --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m03 --control-plane --apiserver-advertise-address=192.168.39.174 --apiserver-bind-port=8443"
	I0923 12:59:17.735394  682373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ll3mfm.tdumzjzob0cezji3 --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m03 --control-plane --apiserver-advertise-address=192.168.39.174 --apiserver-bind-port=8443": (23.211225253s)
	I0923 12:59:17.735437  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 12:59:18.305608  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-097312-m03 minikube.k8s.io/updated_at=2024_09_23T12_59_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-097312 minikube.k8s.io/primary=false
	I0923 12:59:18.439539  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-097312-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 12:59:18.578555  682373 start.go:319] duration metric: took 24.264347271s to joinCluster
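Note: after the join, the node is labeled and its default node-role.kubernetes.io/control-plane:NoSchedule taint is removed so the new control-plane node also accepts regular workloads (Worker:true). The log does this with kubectl over SSH; a client-go sketch of the same taint removal would look roughly like this (hypothetical helper, not minikube's code):

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// removeControlPlaneTaint drops node-role.kubernetes.io/control-plane:NoSchedule
// from the node, which is what the `kubectl taint ... :NoSchedule-` call does.
func removeControlPlaneTaint(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	var kept []corev1.Taint
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := removeControlPlaneTaint(context.Background(), cs, "ha-097312-m03"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}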
	I0923 12:59:18.578645  682373 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:59:18.578956  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:59:18.580466  682373 out.go:177] * Verifying Kubernetes components...
	I0923 12:59:18.581761  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:59:18.828388  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:59:18.856001  682373 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:59:18.856284  682373 kapi.go:59] client config for ha-097312: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key", CAFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 12:59:18.856351  682373 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.160:8443
	I0923 12:59:18.856639  682373 node_ready.go:35] waiting up to 6m0s for node "ha-097312-m03" to be "Ready" ...
	I0923 12:59:18.856738  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:18.856749  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:18.856757  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:18.856766  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:18.860204  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:19.357957  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:19.357992  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:19.358007  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:19.358015  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:19.361736  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:19.857898  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:19.857930  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:19.857938  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:19.857944  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:19.862012  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:20.356893  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:20.356921  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:20.356930  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:20.356934  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:20.363054  682373 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:59:20.857559  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:20.857592  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:20.857605  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:20.857610  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:20.861005  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:20.862362  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:21.357690  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:21.357715  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:21.357724  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:21.357728  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:21.361111  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:21.857622  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:21.857650  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:21.857662  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:21.857666  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:21.861308  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:22.357805  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:22.357838  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:22.357852  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:22.357857  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:22.362010  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:22.856839  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:22.856862  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:22.856870  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:22.856876  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:22.860508  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:23.356920  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:23.356945  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:23.356954  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:23.356958  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:23.361117  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:23.361903  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:23.857041  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:23.857068  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:23.857080  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:23.857085  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:23.860533  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:24.357315  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:24.357339  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:24.357347  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:24.357351  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:24.361517  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:24.857855  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:24.857884  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:24.857895  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:24.857900  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:24.861499  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:25.357580  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:25.357619  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:25.357634  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:25.357642  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:25.361466  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:25.362062  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:25.856889  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:25.856972  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:25.856988  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:25.856995  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:25.864725  682373 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:59:26.357753  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:26.357775  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:26.357783  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:26.357788  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:26.361700  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:26.857569  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:26.857596  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:26.857606  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:26.857610  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:26.861224  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:27.357961  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:27.357993  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:27.358004  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:27.358010  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:27.361578  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:27.362220  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:27.857445  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:27.857476  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:27.857488  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:27.857492  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:27.860961  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:28.356947  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:28.356973  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:28.356982  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:28.356986  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:28.360616  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:28.857670  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:28.857696  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:28.857705  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:28.857709  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:28.861424  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:29.357678  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:29.357701  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:29.357710  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:29.357715  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:29.361197  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:29.857149  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:29.857176  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:29.857184  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:29.857190  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:29.861121  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:29.862064  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:30.357260  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:30.357288  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:30.357300  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:30.357308  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:30.360825  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:30.857554  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:30.857588  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:30.857601  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:30.857607  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:30.862056  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:31.357693  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:31.357719  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:31.357729  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:31.357745  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:31.361364  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:31.857735  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:31.857763  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:31.857772  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:31.857777  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:31.861563  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:31.862191  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:32.357163  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:32.357191  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:32.357201  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:32.357207  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:32.360747  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:32.857730  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:32.857757  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:32.857766  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:32.857770  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:32.861363  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:33.357472  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:33.357507  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:33.357516  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:33.357521  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:33.361140  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:33.857033  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:33.857060  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:33.857069  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:33.857073  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:33.860438  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:34.357801  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:34.357841  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:34.357852  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:34.357857  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:34.361712  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:34.362366  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:34.857887  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:34.857914  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:34.857924  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:34.857929  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:34.861889  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:35.357641  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:35.357673  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:35.357745  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:35.357754  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:35.362328  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:35.856847  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:35.856871  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:35.856879  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:35.856884  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:35.860452  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.357570  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:36.357596  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.357604  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.357608  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.360898  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.361411  682373 node_ready.go:49] node "ha-097312-m03" has status "Ready":"True"
	I0923 12:59:36.361434  682373 node_ready.go:38] duration metric: took 17.504775714s for node "ha-097312-m03" to be "Ready" ...
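Note: the repeated GETs above are a roughly 500ms poll of the node object until its Ready condition reports True. Stripped of minikube's retry and logging plumbing, the check reduces to something like the following helper sketch (package and function names are mine; it expects a clientset built as in the taint-removal sketch earlier):

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady polls the node roughly every 500ms, as the GET loop above
// does, until its Ready condition is True or the timeout expires.
func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}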
	I0923 12:59:36.361446  682373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:59:36.361531  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:36.361549  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.361557  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.361564  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.367567  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:36.374612  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.374726  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6g9x2
	I0923 12:59:36.374738  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.374750  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.374756  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.377869  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.378692  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:36.378712  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.378724  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.378729  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.381742  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.382472  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.382491  682373 pod_ready.go:82] duration metric: took 7.850172ms for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.382500  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.382562  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-txcxz
	I0923 12:59:36.382569  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.382577  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.382582  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.385403  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.386115  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:36.386131  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.386138  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.386142  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.388676  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.389107  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.389124  682373 pod_ready.go:82] duration metric: took 6.617983ms for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.389133  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.389188  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312
	I0923 12:59:36.389195  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.389202  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.389208  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.391701  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.392175  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:36.392190  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.392198  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.392201  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.394837  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.395206  682373 pod_ready.go:93] pod "etcd-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.395227  682373 pod_ready.go:82] duration metric: took 6.08706ms for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.395247  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.395320  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m02
	I0923 12:59:36.395330  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.395337  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.395340  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.398083  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.398586  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:36.398601  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.398608  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.398611  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.401154  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.401531  682373 pod_ready.go:93] pod "etcd-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.401548  682373 pod_ready.go:82] duration metric: took 6.293178ms for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.401558  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.557912  682373 request.go:632] Waited for 156.279648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m03
	I0923 12:59:36.558018  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m03
	I0923 12:59:36.558029  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.558039  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.558047  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.561558  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.757644  682373 request.go:632] Waited for 194.999965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:36.757715  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:36.757723  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.757735  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.757740  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.761054  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.761940  682373 pod_ready.go:93] pod "etcd-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.761961  682373 pod_ready.go:82] duration metric: took 360.394832ms for pod "etcd-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
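Note: the "Waited ... due to client-side throttling" entries come from client-go's own rate limiter, not API priority and fairness. The rest.Config dumped earlier shows QPS:0 and Burst:0, so client-go's defaults of 5 QPS with a burst of 10 apply, and the back-to-back pod and node GETs here exceed them. For illustration only, this sketch shows where those knobs live; minikube keeps the defaults, which is why the waits appear:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Raise the client-side rate limit; with QPS/Burst left at zero,
	// client-go falls back to 5 QPS / burst 10, which a tight poll
	// loop easily exceeds.
	cfg.QPS = 50
	cfg.Burst = 100

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}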
	I0923 12:59:36.761980  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.958288  682373 request.go:632] Waited for 196.158494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:59:36.958372  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:59:36.958380  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.958392  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.958398  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.962196  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:37.157878  682373 request.go:632] Waited for 194.88858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:37.157969  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:37.157982  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.157994  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.158002  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.161325  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:37.162218  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:37.162262  682373 pod_ready.go:82] duration metric: took 400.255775ms for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.162271  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.358381  682373 request.go:632] Waited for 196.017645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:59:37.358481  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:59:37.358490  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.358512  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.358538  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.362068  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:37.558164  682373 request.go:632] Waited for 195.3848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:37.558235  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:37.558245  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.558256  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.558264  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.563780  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:37.564272  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:37.564295  682373 pod_ready.go:82] duration metric: took 402.016943ms for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.564305  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.757786  682373 request.go:632] Waited for 193.39104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m03
	I0923 12:59:37.757874  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m03
	I0923 12:59:37.757881  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.757890  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.757897  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.762281  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:37.958642  682373 request.go:632] Waited for 195.351711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:37.958724  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:37.958731  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.958741  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.958751  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.963464  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:37.964071  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:37.964093  682373 pod_ready.go:82] duration metric: took 399.781684ms for pod "kube-apiserver-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.964104  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.158303  682373 request.go:632] Waited for 194.104315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:59:38.158371  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:59:38.158377  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.158385  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.158391  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.161516  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:38.358608  682373 request.go:632] Waited for 196.37901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:38.358678  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:38.358683  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.358693  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.358707  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.362309  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:38.362758  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:38.362779  682373 pod_ready.go:82] duration metric: took 398.667788ms for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.362790  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.557916  682373 request.go:632] Waited for 195.037752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:59:38.558039  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:59:38.558049  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.558057  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.558064  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.563352  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:38.758557  682373 request.go:632] Waited for 194.402691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:38.758625  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:38.758630  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.758637  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.758647  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.763501  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:38.764092  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:38.764116  682373 pod_ready.go:82] duration metric: took 401.316143ms for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.764127  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.958205  682373 request.go:632] Waited for 193.95149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m03
	I0923 12:59:38.958318  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m03
	I0923 12:59:38.958330  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.958341  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.958349  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.962605  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:39.158615  682373 request.go:632] Waited for 195.29247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.158699  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.158709  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.158718  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.158721  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.162027  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.162535  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:39.162561  682373 pod_ready.go:82] duration metric: took 398.425721ms for pod "kube-controller-manager-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.162572  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.358164  682373 request.go:632] Waited for 195.510394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:59:39.358250  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:59:39.358257  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.358268  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.358277  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.361850  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.558199  682373 request.go:632] Waited for 195.364547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:39.558282  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:39.558297  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.558307  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.558313  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.561590  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.562130  682373 pod_ready.go:93] pod "kube-proxy-drj8m" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:39.562153  682373 pod_ready.go:82] duration metric: took 399.573676ms for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.562166  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vs524" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.758184  682373 request.go:632] Waited for 195.937914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vs524
	I0923 12:59:39.758247  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vs524
	I0923 12:59:39.758252  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.758259  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.758265  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.761790  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.957921  682373 request.go:632] Waited for 195.366189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.957991  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.958005  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.958013  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.958019  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.962060  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:39.962614  682373 pod_ready.go:93] pod "kube-proxy-vs524" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:39.962646  682373 pod_ready.go:82] duration metric: took 400.470478ms for pod "kube-proxy-vs524" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.962661  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.158575  682373 request.go:632] Waited for 195.810945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:59:40.158664  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:59:40.158676  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.158687  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.158696  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.161968  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.358036  682373 request.go:632] Waited for 195.378024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:40.358107  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:40.358112  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.358120  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.358124  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.361928  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.362451  682373 pod_ready.go:93] pod "kube-proxy-z6ss5" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:40.362474  682373 pod_ready.go:82] duration metric: took 399.805025ms for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.362484  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.558528  682373 request.go:632] Waited for 195.950146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:59:40.558598  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:59:40.558612  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.558621  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.558625  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.562266  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.758487  682373 request.go:632] Waited for 195.542399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:40.758572  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:40.758580  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.758591  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.758597  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.761825  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.762402  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:40.762425  682373 pod_ready.go:82] duration metric: took 399.935026ms for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.762434  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.958691  682373 request.go:632] Waited for 196.142693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:59:40.958767  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:59:40.958774  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.958782  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.958789  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.962833  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:41.157936  682373 request.go:632] Waited for 194.384412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:41.158022  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:41.158027  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.158035  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.158040  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.161682  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:41.162279  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:41.162303  682373 pod_ready.go:82] duration metric: took 399.860916ms for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:41.162316  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:41.358427  682373 request.go:632] Waited for 196.013005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m03
	I0923 12:59:41.358521  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m03
	I0923 12:59:41.358530  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.358541  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.358548  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.362666  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:41.557722  682373 request.go:632] Waited for 194.306447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:41.557785  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:41.557790  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.557799  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.557805  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.561165  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:41.561618  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:41.561638  682373 pod_ready.go:82] duration metric: took 399.3114ms for pod "kube-scheduler-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:41.561649  682373 pod_ready.go:39] duration metric: took 5.200192468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:59:41.561668  682373 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:59:41.561726  682373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:59:41.578487  682373 api_server.go:72] duration metric: took 22.999797093s to wait for apiserver process to appear ...
	I0923 12:59:41.578520  682373 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:59:41.578549  682373 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I0923 12:59:41.583195  682373 api_server.go:279] https://192.168.39.160:8443/healthz returned 200:
	ok
	I0923 12:59:41.583283  682373 round_trippers.go:463] GET https://192.168.39.160:8443/version
	I0923 12:59:41.583292  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.583300  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.583303  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.584184  682373 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0923 12:59:41.584348  682373 api_server.go:141] control plane version: v1.31.1
	I0923 12:59:41.584376  682373 api_server.go:131] duration metric: took 5.84872ms to wait for apiserver health ...
	I0923 12:59:41.584386  682373 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:59:41.757749  682373 request.go:632] Waited for 173.249304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:41.757819  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:41.757848  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.757861  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.757869  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.765026  682373 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:59:41.775103  682373 system_pods.go:59] 24 kube-system pods found
	I0923 12:59:41.775147  682373 system_pods.go:61] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:59:41.775153  682373 system_pods.go:61] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:59:41.775158  682373 system_pods.go:61] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:59:41.775162  682373 system_pods.go:61] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:59:41.775166  682373 system_pods.go:61] "etcd-ha-097312-m03" [47812605-2ed5-49dc-acae-7b8ff115b1c5] Running
	I0923 12:59:41.775171  682373 system_pods.go:61] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:59:41.775176  682373 system_pods.go:61] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:59:41.775181  682373 system_pods.go:61] "kindnet-lcrdg" [fc7c4594-c83a-4254-a163-8f66b34c53c0] Running
	I0923 12:59:41.775186  682373 system_pods.go:61] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:59:41.775191  682373 system_pods.go:61] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:59:41.775195  682373 system_pods.go:61] "kube-apiserver-ha-097312-m03" [cfc94901-d0f5-4a59-a8d2-8841462a3166] Running
	I0923 12:59:41.775203  682373 system_pods.go:61] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:59:41.775214  682373 system_pods.go:61] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:59:41.775219  682373 system_pods.go:61] "kube-controller-manager-ha-097312-m03" [70886840-6967-4d3c-a0b7-e6448711e0cc] Running
	I0923 12:59:41.775224  682373 system_pods.go:61] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:59:41.775249  682373 system_pods.go:61] "kube-proxy-vs524" [92738649-c52b-44d5-866b-8cda751a538c] Running
	I0923 12:59:41.775255  682373 system_pods.go:61] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:59:41.775258  682373 system_pods.go:61] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:59:41.775264  682373 system_pods.go:61] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:59:41.775268  682373 system_pods.go:61] "kube-scheduler-ha-097312-m03" [7811405d-6f57-440f-a9a2-178f2a094f61] Running
	I0923 12:59:41.775273  682373 system_pods.go:61] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:59:41.775276  682373 system_pods.go:61] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:59:41.775282  682373 system_pods.go:61] "kube-vip-ha-097312-m03" [1de093b7-e402-48af-ac83-09f59ffd213e] Running
	I0923 12:59:41.775287  682373 system_pods.go:61] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:59:41.775297  682373 system_pods.go:74] duration metric: took 190.903005ms to wait for pod list to return data ...
	I0923 12:59:41.775310  682373 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:59:41.957641  682373 request.go:632] Waited for 182.223415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:59:41.957725  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:59:41.957732  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.957741  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.957748  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.961638  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:41.961870  682373 default_sa.go:45] found service account: "default"
	I0923 12:59:41.961901  682373 default_sa.go:55] duration metric: took 186.579724ms for default service account to be created ...
	I0923 12:59:41.961914  682373 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:59:42.158106  682373 request.go:632] Waited for 196.090807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:42.158184  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:42.158191  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:42.158202  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:42.158209  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:42.163268  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:42.169516  682373 system_pods.go:86] 24 kube-system pods found
	I0923 12:59:42.169555  682373 system_pods.go:89] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:59:42.169562  682373 system_pods.go:89] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:59:42.169566  682373 system_pods.go:89] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:59:42.169570  682373 system_pods.go:89] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:59:42.169574  682373 system_pods.go:89] "etcd-ha-097312-m03" [47812605-2ed5-49dc-acae-7b8ff115b1c5] Running
	I0923 12:59:42.169578  682373 system_pods.go:89] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:59:42.169582  682373 system_pods.go:89] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:59:42.169587  682373 system_pods.go:89] "kindnet-lcrdg" [fc7c4594-c83a-4254-a163-8f66b34c53c0] Running
	I0923 12:59:42.169596  682373 system_pods.go:89] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:59:42.169603  682373 system_pods.go:89] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:59:42.169609  682373 system_pods.go:89] "kube-apiserver-ha-097312-m03" [cfc94901-d0f5-4a59-a8d2-8841462a3166] Running
	I0923 12:59:42.169617  682373 system_pods.go:89] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:59:42.169629  682373 system_pods.go:89] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:59:42.169636  682373 system_pods.go:89] "kube-controller-manager-ha-097312-m03" [70886840-6967-4d3c-a0b7-e6448711e0cc] Running
	I0923 12:59:42.169643  682373 system_pods.go:89] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:59:42.169653  682373 system_pods.go:89] "kube-proxy-vs524" [92738649-c52b-44d5-866b-8cda751a538c] Running
	I0923 12:59:42.169657  682373 system_pods.go:89] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:59:42.169661  682373 system_pods.go:89] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:59:42.169665  682373 system_pods.go:89] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:59:42.169669  682373 system_pods.go:89] "kube-scheduler-ha-097312-m03" [7811405d-6f57-440f-a9a2-178f2a094f61] Running
	I0923 12:59:42.169672  682373 system_pods.go:89] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:59:42.169679  682373 system_pods.go:89] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:59:42.169684  682373 system_pods.go:89] "kube-vip-ha-097312-m03" [1de093b7-e402-48af-ac83-09f59ffd213e] Running
	I0923 12:59:42.169687  682373 system_pods.go:89] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:59:42.169694  682373 system_pods.go:126] duration metric: took 207.772669ms to wait for k8s-apps to be running ...
	I0923 12:59:42.169708  682373 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:59:42.169771  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:59:42.186008  682373 system_svc.go:56] duration metric: took 16.290747ms WaitForService to wait for kubelet
	I0923 12:59:42.186050  682373 kubeadm.go:582] duration metric: took 23.607368403s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:59:42.186083  682373 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:59:42.358541  682373 request.go:632] Waited for 172.350275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes
	I0923 12:59:42.358620  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes
	I0923 12:59:42.358625  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:42.358634  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:42.358638  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:42.361922  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:42.362876  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:59:42.362900  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:59:42.362911  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:59:42.362914  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:59:42.362918  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:59:42.362921  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:59:42.362925  682373 node_conditions.go:105] duration metric: took 176.836519ms to run NodePressure ...
	I0923 12:59:42.362937  682373 start.go:241] waiting for startup goroutines ...
	I0923 12:59:42.362958  682373 start.go:255] writing updated cluster config ...
	I0923 12:59:42.363261  682373 ssh_runner.go:195] Run: rm -f paused
	I0923 12:59:42.417533  682373 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 12:59:42.419577  682373 out.go:177] * Done! kubectl is now configured to use "ha-097312" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.743128418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096612743100781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=454615d1-7bf9-4fa0-a4e6-e6f8077b8442 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.743680810Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52dd506c-7594-4d10-9ae2-3a85cff3bc9f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.743744745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52dd506c-7594-4d10-9ae2-3a85cff3bc9f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.743982040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52dd506c-7594-4d10-9ae2-3a85cff3bc9f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.782780517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a7c4bfb-516d-48a0-b272-e373df43c5b4 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.782855115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a7c4bfb-516d-48a0-b272-e373df43c5b4 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.784453122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0864ec0a-f858-4775-beb4-47284a9b4c6f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.785280048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096612785197443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0864ec0a-f858-4775-beb4-47284a9b4c6f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.785836073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5855608-4e52-43d1-81ba-198be044bba0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.785895526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5855608-4e52-43d1-81ba-198be044bba0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.786121204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5855608-4e52-43d1-81ba-198be044bba0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.825007433Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=babca60f-e529-4de3-acba-d586d9669144 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.825080971Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=babca60f-e529-4de3-acba-d586d9669144 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.826735232Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78cb9e75-6eeb-4889-a460-83c283b857de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.827137347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096612827116569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78cb9e75-6eeb-4889-a460-83c283b857de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.827727689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70735cef-ebf2-46b1-ba57-07856d50cf09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.827795060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70735cef-ebf2-46b1-ba57-07856d50cf09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.828020704Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70735cef-ebf2-46b1-ba57-07856d50cf09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.870334132Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48859d74-6975-4b10-86ee-f9425a4156c2 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.870422567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48859d74-6975-4b10-86ee-f9425a4156c2 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.871316921Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24320f8f-67f2-42c0-9ba2-6b638422d0d6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.871802873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096612871777801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24320f8f-67f2-42c0-9ba2-6b638422d0d6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.872283423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d04e2f1-9ded-4abe-91aa-68ae1489bd0b name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.872354089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d04e2f1-9ded-4abe-91aa-68ae1489bd0b name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:32 ha-097312 crio[666]: time="2024-09-23 13:03:32.872685620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d04e2f1-9ded-4abe-91aa-68ae1489bd0b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0c8b3d3e1c960       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   01a99cef826dd       busybox-7dff88458-4rksx
	6494b72ca963e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   09f40d2b50613       coredns-7c65d6cfc9-txcxz
	070d45bce8ff9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   287ae69fbba66       storage-provisioner
	cead05960724e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   d6346e81a93e3       coredns-7c65d6cfc9-6g9x2
	03670fd92c8a8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   fa074de98ab0b       kindnet-j8l5t
	37b6ad938698e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   8efd7c52e41eb       kube-proxy-drj8m
	e5095373416a8       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   a65df228e8bfd       kube-vip-ha-097312
	9bfbdbe2c35f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   46a49b5018b58       etcd-ha-097312
	5c9e8fb5e944b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   e4cdc1cb583f4       kube-scheduler-ha-097312
	1c28bf3f4d80d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   66109e91b1f78       kube-apiserver-ha-097312
	476ad705f8968       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   d5fd7dbc75ab3       kube-controller-manager-ha-097312
	
	
	==> coredns [6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828] <==
	[INFO] 10.244.1.2:45817 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000653057s
	[INFO] 10.244.1.2:52272 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.003009815s
	[INFO] 10.244.0.4:33030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115409s
	[INFO] 10.244.0.4:45577 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003386554s
	[INFO] 10.244.0.4:34507 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148722s
	[INFO] 10.244.0.4:56395 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159124s
	[INFO] 10.244.2.2:48128 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168767s
	[INFO] 10.244.2.2:38686 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001366329s
	[INFO] 10.244.2.2:54280 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098386s
	[INFO] 10.244.2.2:36178 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083893s
	[INFO] 10.244.1.2:36479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151724s
	[INFO] 10.244.1.2:52581 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000183399s
	[INFO] 10.244.1.2:36358 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015472s
	[INFO] 10.244.0.4:37418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198313s
	[INFO] 10.244.2.2:52660 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011216s
	[INFO] 10.244.1.2:33460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123493s
	[INFO] 10.244.1.2:42619 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000187646s
	[INFO] 10.244.0.4:50282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110854s
	[INFO] 10.244.0.4:48865 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169177s
	[INFO] 10.244.0.4:52671 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110814s
	[INFO] 10.244.2.2:49013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236486s
	[INFO] 10.244.2.2:37600 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000236051s
	[INFO] 10.244.2.2:54687 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137539s
	[INFO] 10.244.1.2:37754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237319s
	[INFO] 10.244.1.2:50571 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167449s
	
	
	==> coredns [cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab] <==
	[INFO] 10.244.0.4:37338 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004244948s
	[INFO] 10.244.0.4:45643 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226629s
	[INFO] 10.244.0.4:55589 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138142s
	[INFO] 10.244.0.4:39714 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089285s
	[INFO] 10.244.2.2:36050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198766s
	[INFO] 10.244.2.2:57929 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002002291s
	[INFO] 10.244.2.2:39920 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241567s
	[INFO] 10.244.2.2:40496 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084082s
	[INFO] 10.244.1.2:53956 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001953841s
	[INFO] 10.244.1.2:39693 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161735s
	[INFO] 10.244.1.2:59255 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001392042s
	[INFO] 10.244.1.2:33162 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137674s
	[INFO] 10.244.1.2:56819 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135224s
	[INFO] 10.244.0.4:58065 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142108s
	[INFO] 10.244.0.4:49950 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114547s
	[INFO] 10.244.0.4:48467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051186s
	[INFO] 10.244.2.2:57485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120774s
	[INFO] 10.244.2.2:47368 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105596s
	[INFO] 10.244.2.2:52953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077623s
	[INFO] 10.244.1.2:45470 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011128s
	[INFO] 10.244.1.2:35601 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157053s
	[INFO] 10.244.0.4:60925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000610878s
	[INFO] 10.244.2.2:48335 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176802s
	[INFO] 10.244.1.2:39758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190843s
	[INFO] 10.244.1.2:35713 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110523s
	
	
	==> describe nodes <==
	Name:               ha-097312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_57_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:57:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:03:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    ha-097312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fef43eb48e8a42b5815ed7c921d42333
	  System UUID:                fef43eb4-8e8a-42b5-815e-d7c921d42333
	  Boot ID:                    22749ef5-5a8a-4d9f-b42e-96dd2d4e32eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4rksx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 coredns-7c65d6cfc9-6g9x2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 coredns-7c65d6cfc9-txcxz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 etcd-ha-097312                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m31s
	  kube-system                 kindnet-j8l5t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m27s
	  kube-system                 kube-apiserver-ha-097312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-ha-097312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-proxy-drj8m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-scheduler-ha-097312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-vip-ha-097312                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m24s  kube-proxy       
	  Normal  Starting                 6m31s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m31s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m31s  kubelet          Node ha-097312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s  kubelet          Node ha-097312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s  kubelet          Node ha-097312 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m28s  node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal  NodeReady                6m14s  kubelet          Node ha-097312 status is now: NodeReady
	  Normal  RegisteredNode           5m28s  node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal  RegisteredNode           4m10s  node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	
	
	Name:               ha-097312-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_57_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:57:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:01:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    ha-097312-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 226ea4f6db5b44f7bdab73033cb7ae33
	  System UUID:                226ea4f6-db5b-44f7-bdab-73033cb7ae33
	  Boot ID:                    8cb64dab-25d7-4dcd-9c08-1dcc2d214767
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wz97n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-097312-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m34s
	  kube-system                 kindnet-hcclj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m36s
	  kube-system                 kube-apiserver-ha-097312-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-controller-manager-ha-097312-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-proxy-z6ss5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-scheduler-ha-097312-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-vip-ha-097312-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m32s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m36s (x8 over 5m37s)  kubelet          Node ha-097312-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s (x8 over 5m37s)  kubelet          Node ha-097312-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s (x7 over 5m37s)  kubelet          Node ha-097312-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-097312-m02 status is now: NodeNotReady
	
	
	Name:               ha-097312-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_59_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:59:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:03:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    ha-097312-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 21b2a00385684360824371ae7a980598
	  System UUID:                21b2a003-8568-4360-8243-71ae7a980598
	  Boot ID:                    960c8b17-8be2-4e75-85e5-dc8c84a6f034
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tx8b9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-097312-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kindnet-lcrdg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m19s
	  kube-system                 kube-apiserver-ha-097312-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-ha-097312-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-vs524                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-scheduler-ha-097312-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-vip-ha-097312-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node ha-097312-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node ha-097312-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node ha-097312-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	
	
	Name:               ha-097312-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_00_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:00:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:03:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    ha-097312-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 23903b49596849ed8163495c455231a4
	  System UUID:                23903b49-5968-49ed-8163-495c455231a4
	  Boot ID:                    b209787f-e977-446d-9180-ea83c0a28142
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pzs94       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m8s
	  kube-system                 kube-proxy-7hlnw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  RegisteredNode           3m8s                 node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal  RegisteredNode           3m8s                 node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m9s)  kubelet          Node ha-097312-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m9s)  kubelet          Node ha-097312-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m9s)  kubelet          Node ha-097312-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal  NodeReady                2m48s                kubelet          Node ha-097312-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep23 12:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052097] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038111] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.768653] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.021290] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.561361] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.704633] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.056129] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055848] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170191] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.146996] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.300750] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.930853] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +3.791133] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.059635] kauditd_printk_skb: 158 callbacks suppressed
	[Sep23 12:57] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.088641] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.268527] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.165221] kauditd_printk_skb: 38 callbacks suppressed
	[Sep23 12:58] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad] <==
	{"level":"warn","ts":"2024-09-23T13:03:33.142886Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.148021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.152988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.163864Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.170490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.176359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.179715Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.184596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.184763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.186735Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.192833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.198825Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.208611Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.214851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.218816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.231027Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.238938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.245783Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.249794Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.253430Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.257600Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.264136Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.271038Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.282129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:33.284044Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:03:33 up 7 min,  0 users,  load average: 0.14, 0.24, 0.13
	Linux ha-097312 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c] <==
	I0923 13:02:59.639523       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:03:09.635495       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:03:09.635583       1 main.go:299] handling current node
	I0923 13:03:09.635613       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:03:09.635671       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:03:09.635944       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:03:09.635992       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:03:09.636057       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:03:09.636075       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:03:19.639090       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:03:19.639126       1 main.go:299] handling current node
	I0923 13:03:19.639140       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:03:19.639145       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:03:19.639271       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:03:19.639276       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:03:19.639330       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:03:19.639334       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:03:29.638527       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:03:29.638610       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:03:29.638800       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:03:29.638822       1 main.go:299] handling current node
	I0923 13:03:29.638844       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:03:29.638848       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:03:29.638897       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:03:29.638914       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333] <==
	I0923 12:57:02.020359       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0923 12:57:02.088327       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 12:57:06.152802       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0923 12:57:06.755775       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0923 12:57:57.925529       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 12:57:57.925590       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.353µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0923 12:57:57.926736       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0923 12:57:57.927891       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 12:57:57.929106       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.691541ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0923 12:59:48.392448       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33954: use of closed network connection
	E0923 12:59:48.613880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33958: use of closed network connection
	E0923 12:59:48.808088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58634: use of closed network connection
	E0923 12:59:49.001780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58648: use of closed network connection
	E0923 12:59:49.197483       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58666: use of closed network connection
	E0923 12:59:49.377774       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58694: use of closed network connection
	E0923 12:59:49.575983       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58712: use of closed network connection
	E0923 12:59:49.768426       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58734: use of closed network connection
	E0923 12:59:49.967451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58756: use of closed network connection
	E0923 12:59:50.265392       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58784: use of closed network connection
	E0923 12:59:50.450981       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58804: use of closed network connection
	E0923 12:59:50.652809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58810: use of closed network connection
	E0923 12:59:50.861752       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58822: use of closed network connection
	E0923 12:59:51.064797       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58830: use of closed network connection
	E0923 12:59:51.264921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58846: use of closed network connection
	W0923 13:01:20.906998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160 192.168.39.174]
	
	
	==> kube-controller-manager [476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492] <==
	I0923 13:00:25.249956       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-097312-m04" podCIDRs=["10.244.3.0/24"]
	I0923 13:00:25.250021       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.250063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.268205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.370449       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.456902       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.813447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.983304       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-097312-m04"
	I0923 13:00:25.983773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:26.090111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:28.408814       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:28.484815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:35.660172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:45.897287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:45.897415       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-097312-m04"
	I0923 13:00:45.912394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:46.005249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:55.964721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:01:43.436073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	I0923 13:01:43.436177       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-097312-m04"
	I0923 13:01:43.460744       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	I0923 13:01:43.587511       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.511152ms"
	I0923 13:01:43.588537       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.099µs"
	I0923 13:01:46.104982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	I0923 13:01:48.741428       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	
	
	==> kube-proxy [37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 12:57:08.497927       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 12:57:08.513689       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.160"]
	E0923 12:57:08.513839       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 12:57:08.553172       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 12:57:08.553258       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 12:57:08.553295       1 server_linux.go:169] "Using iptables Proxier"
	I0923 12:57:08.556859       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 12:57:08.557876       1 server.go:483] "Version info" version="v1.31.1"
	I0923 12:57:08.557939       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:57:08.564961       1 config.go:199] "Starting service config controller"
	I0923 12:57:08.565367       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 12:57:08.565715       1 config.go:328] "Starting node config controller"
	I0923 12:57:08.570600       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 12:57:08.566364       1 config.go:105] "Starting endpoint slice config controller"
	I0923 12:57:08.570712       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 12:57:08.570719       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 12:57:08.666413       1 shared_informer.go:320] Caches are synced for service config
	I0923 12:57:08.670755       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431] <==
	W0923 12:57:00.057793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 12:57:00.058398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.080608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.080826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.112818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.112990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.129261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 12:57:00.129830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.181934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:57:00.182022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.183285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.183358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.190093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 12:57:00.190177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.223708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:57:00.223794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.255027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.255136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.582968       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:57:00.583073       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 12:57:02.534371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 12:59:14.854178       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vs524\": pod kube-proxy-vs524 is already assigned to node \"ha-097312-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vs524" node="ha-097312-m03"
	E0923 12:59:14.854357       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 92738649-c52b-44d5-866b-8cda751a538c(kube-system/kube-proxy-vs524) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vs524"
	E0923 12:59:14.854394       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vs524\": pod kube-proxy-vs524 is already assigned to node \"ha-097312-m03\"" pod="kube-system/kube-proxy-vs524"
	I0923 12:59:14.854436       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vs524" node="ha-097312-m03"
	
	
	==> kubelet <==
	Sep 23 13:02:02 ha-097312 kubelet[1304]: E0923 13:02:02.214007    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096522213607138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:02 ha-097312 kubelet[1304]: E0923 13:02:02.214059    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096522213607138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:12 ha-097312 kubelet[1304]: E0923 13:02:12.219070    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096532215431820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:12 ha-097312 kubelet[1304]: E0923 13:02:12.219206    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096532215431820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:22 ha-097312 kubelet[1304]: E0923 13:02:22.225821    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096542223481825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:22 ha-097312 kubelet[1304]: E0923 13:02:22.230227    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096542223481825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:32 ha-097312 kubelet[1304]: E0923 13:02:32.232689    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096552232228787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:32 ha-097312 kubelet[1304]: E0923 13:02:32.233031    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096552232228787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:42 ha-097312 kubelet[1304]: E0923 13:02:42.235021    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096562234565302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:42 ha-097312 kubelet[1304]: E0923 13:02:42.235083    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096562234565302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:52 ha-097312 kubelet[1304]: E0923 13:02:52.237647    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096572237152536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:52 ha-097312 kubelet[1304]: E0923 13:02:52.237938    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096572237152536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:02 ha-097312 kubelet[1304]: E0923 13:03:02.165544    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:03:02 ha-097312 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:03:02 ha-097312 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:03:02 ha-097312 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:03:02 ha-097312 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:03:02 ha-097312 kubelet[1304]: E0923 13:03:02.240514    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096582240150204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:02 ha-097312 kubelet[1304]: E0923 13:03:02.240606    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096582240150204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:12 ha-097312 kubelet[1304]: E0923 13:03:12.243234    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096592242789885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:12 ha-097312 kubelet[1304]: E0923 13:03:12.243281    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096592242789885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:22 ha-097312 kubelet[1304]: E0923 13:03:22.245580    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096602245012698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:22 ha-097312 kubelet[1304]: E0923 13:03:22.246002    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096602245012698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:32 ha-097312 kubelet[1304]: E0923 13:03:32.247916    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096612247450002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:32 ha-097312 kubelet[1304]: E0923 13:03:32.247947    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096612247450002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-097312 -n ha-097312
helpers_test.go:261: (dbg) Run:  kubectl --context ha-097312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (6.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr: (3.876376974s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-097312 -n ha-097312
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-097312 logs -n 25: (1.435224623s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312:/home/docker/cp-test_ha-097312-m03_ha-097312.txt                       |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312 sudo cat                                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312.txt                                 |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m02:/home/docker/cp-test_ha-097312-m03_ha-097312-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m02 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04:/home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m04 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp testdata/cp-test.txt                                                | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3809348295/001/cp-test_ha-097312-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312:/home/docker/cp-test_ha-097312-m04_ha-097312.txt                       |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312 sudo cat                                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312.txt                                 |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m02:/home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m02 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03:/home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m03 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-097312 node stop m02 -v=7                                                     | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-097312 node start m02 -v=7                                                    | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:56:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:56:21.828511  682373 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:56:21.828805  682373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:56:21.828814  682373 out.go:358] Setting ErrFile to fd 2...
	I0923 12:56:21.828819  682373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:56:21.829029  682373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 12:56:21.829675  682373 out.go:352] Setting JSON to false
	I0923 12:56:21.830688  682373 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9525,"bootTime":1727086657,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:56:21.830795  682373 start.go:139] virtualization: kvm guest
	I0923 12:56:21.833290  682373 out.go:177] * [ha-097312] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:56:21.834872  682373 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:56:21.834925  682373 notify.go:220] Checking for updates...
	I0923 12:56:21.837758  682373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:56:21.839025  682373 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:56:21.840177  682373 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:56:21.841224  682373 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:56:21.842534  682373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:56:21.843976  682373 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:56:21.880376  682373 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 12:56:21.881602  682373 start.go:297] selected driver: kvm2
	I0923 12:56:21.881616  682373 start.go:901] validating driver "kvm2" against <nil>
	I0923 12:56:21.881629  682373 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:56:21.882531  682373 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:56:21.882644  682373 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 12:56:21.899127  682373 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 12:56:21.899181  682373 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:56:21.899449  682373 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:56:21.899480  682373 cni.go:84] Creating CNI manager for ""
	I0923 12:56:21.899527  682373 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 12:56:21.899535  682373 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 12:56:21.899626  682373 start.go:340] cluster config:
	{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:56:21.899742  682373 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:56:21.901896  682373 out.go:177] * Starting "ha-097312" primary control-plane node in "ha-097312" cluster
	I0923 12:56:21.903202  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:56:21.903247  682373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 12:56:21.903256  682373 cache.go:56] Caching tarball of preloaded images
	I0923 12:56:21.903357  682373 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:56:21.903371  682373 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:56:21.903879  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:56:21.903923  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json: {Name:mkf732f530eb47d72142f084d9eb3cd0edcde9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:21.904117  682373 start.go:360] acquireMachinesLock for ha-097312: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:56:21.904165  682373 start.go:364] duration metric: took 29.656µs to acquireMachinesLock for "ha-097312"
	I0923 12:56:21.904184  682373 start.go:93] Provisioning new machine with config: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:56:21.904282  682373 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 12:56:21.905963  682373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:56:21.906128  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:56:21.906175  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:56:21.921537  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41699
	I0923 12:56:21.922061  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:56:21.922650  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:56:21.922667  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:56:21.923007  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:56:21.923179  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:21.923321  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:21.923466  682373 start.go:159] libmachine.API.Create for "ha-097312" (driver="kvm2")
	I0923 12:56:21.923507  682373 client.go:168] LocalClient.Create starting
	I0923 12:56:21.923545  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:56:21.923585  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:56:21.923623  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:56:21.923700  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:56:21.923738  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:56:21.923763  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:56:21.923785  682373 main.go:141] libmachine: Running pre-create checks...
	I0923 12:56:21.923796  682373 main.go:141] libmachine: (ha-097312) Calling .PreCreateCheck
	I0923 12:56:21.924185  682373 main.go:141] libmachine: (ha-097312) Calling .GetConfigRaw
	I0923 12:56:21.924615  682373 main.go:141] libmachine: Creating machine...
	I0923 12:56:21.924630  682373 main.go:141] libmachine: (ha-097312) Calling .Create
	I0923 12:56:21.924800  682373 main.go:141] libmachine: (ha-097312) Creating KVM machine...
	I0923 12:56:21.926163  682373 main.go:141] libmachine: (ha-097312) DBG | found existing default KVM network
	I0923 12:56:21.926884  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:21.926751  682396 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0923 12:56:21.926933  682373 main.go:141] libmachine: (ha-097312) DBG | created network xml: 
	I0923 12:56:21.926948  682373 main.go:141] libmachine: (ha-097312) DBG | <network>
	I0923 12:56:21.926958  682373 main.go:141] libmachine: (ha-097312) DBG |   <name>mk-ha-097312</name>
	I0923 12:56:21.926973  682373 main.go:141] libmachine: (ha-097312) DBG |   <dns enable='no'/>
	I0923 12:56:21.926984  682373 main.go:141] libmachine: (ha-097312) DBG |   
	I0923 12:56:21.926995  682373 main.go:141] libmachine: (ha-097312) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 12:56:21.927005  682373 main.go:141] libmachine: (ha-097312) DBG |     <dhcp>
	I0923 12:56:21.927010  682373 main.go:141] libmachine: (ha-097312) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 12:56:21.927018  682373 main.go:141] libmachine: (ha-097312) DBG |     </dhcp>
	I0923 12:56:21.927023  682373 main.go:141] libmachine: (ha-097312) DBG |   </ip>
	I0923 12:56:21.927028  682373 main.go:141] libmachine: (ha-097312) DBG |   
	I0923 12:56:21.927037  682373 main.go:141] libmachine: (ha-097312) DBG | </network>
	I0923 12:56:21.927049  682373 main.go:141] libmachine: (ha-097312) DBG | 
	I0923 12:56:21.932476  682373 main.go:141] libmachine: (ha-097312) DBG | trying to create private KVM network mk-ha-097312 192.168.39.0/24...
	I0923 12:56:22.007044  682373 main.go:141] libmachine: (ha-097312) DBG | private KVM network mk-ha-097312 192.168.39.0/24 created
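
The kvm2 driver creates the private network above through the libvirt API directly. For readers reproducing this step by hand, the following is only a rough, hypothetical equivalent that shells out to the virsh CLI with the same network XML printed in the log (it assumes virsh is installed and qemu:///system is reachable; the on-disk file name is made up for the example):

package main

import (
	"log"
	"os"
	"os/exec"
)

// networkXML mirrors the network definition printed in the log above.
const networkXML = `<network>
  <name>mk-ha-097312</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Hypothetical file name; virsh net-define reads the XML from disk.
	if err := os.WriteFile("mk-ha-097312.xml", []byte(networkXML), 0o644); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"net-define", "mk-ha-097312.xml"},
		{"net-start", "mk-ha-097312"},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("virsh %v failed: %v", args, err)
		}
	}
}
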
	I0923 12:56:22.007081  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.007015  682396 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:56:22.007094  682373 main.go:141] libmachine: (ha-097312) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312 ...
	I0923 12:56:22.007109  682373 main.go:141] libmachine: (ha-097312) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:56:22.007154  682373 main.go:141] libmachine: (ha-097312) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:56:22.288956  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.288821  682396 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa...
	I0923 12:56:22.447093  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.446935  682396 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/ha-097312.rawdisk...
	I0923 12:56:22.447150  682373 main.go:141] libmachine: (ha-097312) DBG | Writing magic tar header
	I0923 12:56:22.447245  682373 main.go:141] libmachine: (ha-097312) DBG | Writing SSH key tar header
	I0923 12:56:22.447298  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:22.447079  682396 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312 ...
	I0923 12:56:22.447319  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312 (perms=drwx------)
	I0923 12:56:22.447334  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:56:22.447344  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:56:22.447360  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:56:22.447372  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:56:22.447381  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312
	I0923 12:56:22.447394  682373 main.go:141] libmachine: (ha-097312) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:56:22.447407  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:56:22.447421  682373 main.go:141] libmachine: (ha-097312) Creating domain...
	I0923 12:56:22.447439  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:56:22.447455  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:56:22.447468  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:56:22.447479  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:56:22.447492  682373 main.go:141] libmachine: (ha-097312) DBG | Checking permissions on dir: /home
	I0923 12:56:22.447500  682373 main.go:141] libmachine: (ha-097312) DBG | Skipping /home - not owner
	I0923 12:56:22.448456  682373 main.go:141] libmachine: (ha-097312) define libvirt domain using xml: 
	I0923 12:56:22.448482  682373 main.go:141] libmachine: (ha-097312) <domain type='kvm'>
	I0923 12:56:22.448488  682373 main.go:141] libmachine: (ha-097312)   <name>ha-097312</name>
	I0923 12:56:22.448493  682373 main.go:141] libmachine: (ha-097312)   <memory unit='MiB'>2200</memory>
	I0923 12:56:22.448498  682373 main.go:141] libmachine: (ha-097312)   <vcpu>2</vcpu>
	I0923 12:56:22.448502  682373 main.go:141] libmachine: (ha-097312)   <features>
	I0923 12:56:22.448506  682373 main.go:141] libmachine: (ha-097312)     <acpi/>
	I0923 12:56:22.448510  682373 main.go:141] libmachine: (ha-097312)     <apic/>
	I0923 12:56:22.448514  682373 main.go:141] libmachine: (ha-097312)     <pae/>
	I0923 12:56:22.448526  682373 main.go:141] libmachine: (ha-097312)     
	I0923 12:56:22.448561  682373 main.go:141] libmachine: (ha-097312)   </features>
	I0923 12:56:22.448583  682373 main.go:141] libmachine: (ha-097312)   <cpu mode='host-passthrough'>
	I0923 12:56:22.448588  682373 main.go:141] libmachine: (ha-097312)   
	I0923 12:56:22.448594  682373 main.go:141] libmachine: (ha-097312)   </cpu>
	I0923 12:56:22.448600  682373 main.go:141] libmachine: (ha-097312)   <os>
	I0923 12:56:22.448607  682373 main.go:141] libmachine: (ha-097312)     <type>hvm</type>
	I0923 12:56:22.448634  682373 main.go:141] libmachine: (ha-097312)     <boot dev='cdrom'/>
	I0923 12:56:22.448653  682373 main.go:141] libmachine: (ha-097312)     <boot dev='hd'/>
	I0923 12:56:22.448665  682373 main.go:141] libmachine: (ha-097312)     <bootmenu enable='no'/>
	I0923 12:56:22.448674  682373 main.go:141] libmachine: (ha-097312)   </os>
	I0923 12:56:22.448693  682373 main.go:141] libmachine: (ha-097312)   <devices>
	I0923 12:56:22.448701  682373 main.go:141] libmachine: (ha-097312)     <disk type='file' device='cdrom'>
	I0923 12:56:22.448711  682373 main.go:141] libmachine: (ha-097312)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/boot2docker.iso'/>
	I0923 12:56:22.448722  682373 main.go:141] libmachine: (ha-097312)       <target dev='hdc' bus='scsi'/>
	I0923 12:56:22.448735  682373 main.go:141] libmachine: (ha-097312)       <readonly/>
	I0923 12:56:22.448746  682373 main.go:141] libmachine: (ha-097312)     </disk>
	I0923 12:56:22.448754  682373 main.go:141] libmachine: (ha-097312)     <disk type='file' device='disk'>
	I0923 12:56:22.448761  682373 main.go:141] libmachine: (ha-097312)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:56:22.448771  682373 main.go:141] libmachine: (ha-097312)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/ha-097312.rawdisk'/>
	I0923 12:56:22.448779  682373 main.go:141] libmachine: (ha-097312)       <target dev='hda' bus='virtio'/>
	I0923 12:56:22.448783  682373 main.go:141] libmachine: (ha-097312)     </disk>
	I0923 12:56:22.448790  682373 main.go:141] libmachine: (ha-097312)     <interface type='network'>
	I0923 12:56:22.448799  682373 main.go:141] libmachine: (ha-097312)       <source network='mk-ha-097312'/>
	I0923 12:56:22.448805  682373 main.go:141] libmachine: (ha-097312)       <model type='virtio'/>
	I0923 12:56:22.448810  682373 main.go:141] libmachine: (ha-097312)     </interface>
	I0923 12:56:22.448820  682373 main.go:141] libmachine: (ha-097312)     <interface type='network'>
	I0923 12:56:22.448833  682373 main.go:141] libmachine: (ha-097312)       <source network='default'/>
	I0923 12:56:22.448840  682373 main.go:141] libmachine: (ha-097312)       <model type='virtio'/>
	I0923 12:56:22.448845  682373 main.go:141] libmachine: (ha-097312)     </interface>
	I0923 12:56:22.448855  682373 main.go:141] libmachine: (ha-097312)     <serial type='pty'>
	I0923 12:56:22.448860  682373 main.go:141] libmachine: (ha-097312)       <target port='0'/>
	I0923 12:56:22.448869  682373 main.go:141] libmachine: (ha-097312)     </serial>
	I0923 12:56:22.448875  682373 main.go:141] libmachine: (ha-097312)     <console type='pty'>
	I0923 12:56:22.448885  682373 main.go:141] libmachine: (ha-097312)       <target type='serial' port='0'/>
	I0923 12:56:22.448897  682373 main.go:141] libmachine: (ha-097312)     </console>
	I0923 12:56:22.448912  682373 main.go:141] libmachine: (ha-097312)     <rng model='virtio'>
	I0923 12:56:22.448925  682373 main.go:141] libmachine: (ha-097312)       <backend model='random'>/dev/random</backend>
	I0923 12:56:22.448933  682373 main.go:141] libmachine: (ha-097312)     </rng>
	I0923 12:56:22.448940  682373 main.go:141] libmachine: (ha-097312)     
	I0923 12:56:22.448949  682373 main.go:141] libmachine: (ha-097312)     
	I0923 12:56:22.448957  682373 main.go:141] libmachine: (ha-097312)   </devices>
	I0923 12:56:22.448965  682373 main.go:141] libmachine: (ha-097312) </domain>
	I0923 12:56:22.448975  682373 main.go:141] libmachine: (ha-097312) 
	I0923 12:56:22.453510  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:86:5c:23 in network default
	I0923 12:56:22.454136  682373 main.go:141] libmachine: (ha-097312) Ensuring networks are active...
	I0923 12:56:22.454160  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:22.455025  682373 main.go:141] libmachine: (ha-097312) Ensuring network default is active
	I0923 12:56:22.455403  682373 main.go:141] libmachine: (ha-097312) Ensuring network mk-ha-097312 is active
	I0923 12:56:22.455910  682373 main.go:141] libmachine: (ha-097312) Getting domain xml...
	I0923 12:56:22.456804  682373 main.go:141] libmachine: (ha-097312) Creating domain...
	I0923 12:56:23.684285  682373 main.go:141] libmachine: (ha-097312) Waiting to get IP...
	I0923 12:56:23.685050  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:23.685483  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:23.685549  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:23.685457  682396 retry.go:31] will retry after 284.819092ms: waiting for machine to come up
	I0923 12:56:23.972224  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:23.972712  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:23.972742  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:23.972658  682396 retry.go:31] will retry after 296.568661ms: waiting for machine to come up
	I0923 12:56:24.271431  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:24.271859  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:24.271878  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:24.271837  682396 retry.go:31] will retry after 305.883088ms: waiting for machine to come up
	I0923 12:56:24.579449  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:24.579888  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:24.579915  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:24.579844  682396 retry.go:31] will retry after 417.526062ms: waiting for machine to come up
	I0923 12:56:24.999494  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:24.999869  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:24.999897  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:24.999819  682396 retry.go:31] will retry after 647.110055ms: waiting for machine to come up
	I0923 12:56:25.648547  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:25.649112  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:25.649144  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:25.649045  682396 retry.go:31] will retry after 699.974926ms: waiting for machine to come up
	I0923 12:56:26.350970  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:26.351427  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:26.351457  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:26.351401  682396 retry.go:31] will retry after 822.151225ms: waiting for machine to come up
	I0923 12:56:27.175278  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:27.175659  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:27.175688  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:27.175617  682396 retry.go:31] will retry after 1.471324905s: waiting for machine to come up
	I0923 12:56:28.649431  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:28.649912  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:28.649939  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:28.649865  682396 retry.go:31] will retry after 1.835415418s: waiting for machine to come up
	I0923 12:56:30.487327  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:30.487788  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:30.487842  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:30.487762  682396 retry.go:31] will retry after 1.452554512s: waiting for machine to come up
	I0923 12:56:31.941929  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:31.942466  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:31.942496  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:31.942406  682396 retry.go:31] will retry after 2.833337463s: waiting for machine to come up
	I0923 12:56:34.777034  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:34.777417  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:34.777435  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:34.777385  682396 retry.go:31] will retry after 2.506824406s: waiting for machine to come up
	I0923 12:56:37.285508  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:37.285975  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:37.286004  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:37.285923  682396 retry.go:31] will retry after 2.872661862s: waiting for machine to come up
	I0923 12:56:40.162076  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:40.162525  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find current IP address of domain ha-097312 in network mk-ha-097312
	I0923 12:56:40.162542  682373 main.go:141] libmachine: (ha-097312) DBG | I0923 12:56:40.162478  682396 retry.go:31] will retry after 3.815832653s: waiting for machine to come up
	I0923 12:56:43.980644  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:43.981295  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has current primary IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:43.981341  682373 main.go:141] libmachine: (ha-097312) Found IP for machine: 192.168.39.160
	I0923 12:56:43.981355  682373 main.go:141] libmachine: (ha-097312) Reserving static IP address...
	I0923 12:56:43.981713  682373 main.go:141] libmachine: (ha-097312) DBG | unable to find host DHCP lease matching {name: "ha-097312", mac: "52:54:00:06:7f:c5", ip: "192.168.39.160"} in network mk-ha-097312
	I0923 12:56:44.063688  682373 main.go:141] libmachine: (ha-097312) DBG | Getting to WaitForSSH function...
	I0923 12:56:44.063720  682373 main.go:141] libmachine: (ha-097312) Reserved static IP address: 192.168.39.160
	I0923 12:56:44.063760  682373 main.go:141] libmachine: (ha-097312) Waiting for SSH to be available...
	I0923 12:56:44.066589  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.067094  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.067121  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.067273  682373 main.go:141] libmachine: (ha-097312) DBG | Using SSH client type: external
	I0923 12:56:44.067298  682373 main.go:141] libmachine: (ha-097312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa (-rw-------)
	I0923 12:56:44.067335  682373 main.go:141] libmachine: (ha-097312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:56:44.067346  682373 main.go:141] libmachine: (ha-097312) DBG | About to run SSH command:
	I0923 12:56:44.067388  682373 main.go:141] libmachine: (ha-097312) DBG | exit 0
	I0923 12:56:44.194221  682373 main.go:141] libmachine: (ha-097312) DBG | SSH cmd err, output: <nil>: 
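
The WaitForSSH step above simply retries a no-op command (`exit 0`) over SSH until the guest answers. A minimal sketch of that pattern, not minikube's actual implementation, reusing the host, user, key path, and non-interactive options recorded in this log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Host, user, and key path are the ones recorded in this log.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa",
		"-p", "22",
		"docker@192.168.39.160",
		"exit 0",
	}
	for attempt := 1; attempt <= 30; attempt++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Printf("SSH available after %d attempt(s)\n", attempt)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
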
	I0923 12:56:44.194546  682373 main.go:141] libmachine: (ha-097312) KVM machine creation complete!
	I0923 12:56:44.194794  682373 main.go:141] libmachine: (ha-097312) Calling .GetConfigRaw
	I0923 12:56:44.195383  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:44.195600  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:44.195740  682373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:56:44.195754  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:56:44.197002  682373 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:56:44.197015  682373 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:56:44.197021  682373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:56:44.197025  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.200085  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.200458  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.200480  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.200781  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.201011  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.201209  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.201346  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.201528  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.201732  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.201744  682373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:56:44.309556  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:56:44.309581  682373 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:56:44.309589  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.312757  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.313154  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.313202  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.313393  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.313633  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.313899  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.314086  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.314302  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.314501  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.314513  682373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:56:44.422704  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:56:44.422779  682373 main.go:141] libmachine: found compatible host: buildroot
	I0923 12:56:44.422786  682373 main.go:141] libmachine: Provisioning with buildroot...
	I0923 12:56:44.422796  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:44.423069  682373 buildroot.go:166] provisioning hostname "ha-097312"
	I0923 12:56:44.423101  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:44.423298  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.426419  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.426747  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.426769  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.426988  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.427186  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.427341  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.427471  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.427647  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.427840  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.427852  682373 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-097312 && echo "ha-097312" | sudo tee /etc/hostname
	I0923 12:56:44.548083  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312
	
	I0923 12:56:44.548119  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.550930  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.551237  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.551281  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.551446  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.551667  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.551843  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.551987  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.552153  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:44.552393  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:44.552421  682373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-097312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-097312/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-097312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:56:44.667004  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:56:44.667043  682373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:56:44.667068  682373 buildroot.go:174] setting up certificates
	I0923 12:56:44.667085  682373 provision.go:84] configureAuth start
	I0923 12:56:44.667098  682373 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 12:56:44.667438  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:44.670311  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.670792  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.670845  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.670910  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.673549  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.673871  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.673897  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.674038  682373 provision.go:143] copyHostCerts
	I0923 12:56:44.674077  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:56:44.674137  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 12:56:44.674159  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:56:44.674245  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:56:44.674380  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:56:44.674409  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 12:56:44.674417  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:56:44.674460  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:56:44.674580  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:56:44.674634  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 12:56:44.674642  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:56:44.674698  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 12:56:44.674832  682373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.ha-097312 san=[127.0.0.1 192.168.39.160 ha-097312 localhost minikube]
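
configureAuth generates a CA-signed server certificate whose SANs are exactly the values listed above (127.0.0.1, 192.168.39.160, ha-097312, localhost, minikube). The sketch below illustrates that kind of certificate generation in Go; it creates a throwaway CA purely for the example, whereas minikube signs with the ca.pem/ca-key.pem already present under .minikube/certs:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Throwaway CA, standing in for ca.pem/ca-key.pem from .minikube/certs.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs listed in the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-097312"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-097312", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.160")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	// Write the leaf certificate as PEM (the provisioner then copies it to /etc/docker/server.pem).
	out, err := os.Create("server.pem")
	check(err)
	defer out.Close()
	check(pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
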
	I0923 12:56:44.904863  682373 provision.go:177] copyRemoteCerts
	I0923 12:56:44.904957  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:56:44.904984  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:44.908150  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.908582  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:44.908619  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:44.908884  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:44.909135  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:44.909342  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:44.909527  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:44.992087  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:56:44.992199  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 12:56:45.016139  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:56:45.016229  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:56:45.039856  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:56:45.040045  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:56:45.063092  682373 provision.go:87] duration metric: took 395.980147ms to configureAuth
	I0923 12:56:45.063127  682373 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:56:45.063302  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:56:45.063398  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.066695  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.067038  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.067071  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.067240  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.067488  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.067676  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.067817  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.068046  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:45.068308  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:45.068326  682373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:56:45.283348  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 12:56:45.283372  682373 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:56:45.283380  682373 main.go:141] libmachine: (ha-097312) Calling .GetURL
	I0923 12:56:45.284754  682373 main.go:141] libmachine: (ha-097312) DBG | Using libvirt version 6000000
	I0923 12:56:45.287147  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.287577  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.287606  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.287745  682373 main.go:141] libmachine: Docker is up and running!
	I0923 12:56:45.287766  682373 main.go:141] libmachine: Reticulating splines...
	I0923 12:56:45.287773  682373 client.go:171] duration metric: took 23.364255409s to LocalClient.Create
	I0923 12:56:45.287797  682373 start.go:167] duration metric: took 23.364332593s to libmachine.API.Create "ha-097312"
	I0923 12:56:45.287811  682373 start.go:293] postStartSetup for "ha-097312" (driver="kvm2")
	I0923 12:56:45.287824  682373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:56:45.287841  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.288125  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:56:45.288161  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.290362  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.290827  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.290857  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.291024  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.291233  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.291406  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.291630  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:45.376057  682373 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:56:45.380314  682373 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:56:45.380346  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:56:45.380412  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:56:45.380483  682373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 12:56:45.380492  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 12:56:45.380593  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:56:45.390109  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:56:45.414414  682373 start.go:296] duration metric: took 126.585208ms for postStartSetup
	I0923 12:56:45.414519  682373 main.go:141] libmachine: (ha-097312) Calling .GetConfigRaw
	I0923 12:56:45.415223  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:45.418035  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.418499  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.418535  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.418757  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:56:45.418971  682373 start.go:128] duration metric: took 23.514676713s to createHost
	I0923 12:56:45.419008  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.421290  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.421582  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.421607  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.421739  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.421993  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.422231  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.422397  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.422624  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:56:45.422888  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 12:56:45.422913  682373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:56:45.530668  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727096205.504964904
	
	I0923 12:56:45.530696  682373 fix.go:216] guest clock: 1727096205.504964904
	I0923 12:56:45.530705  682373 fix.go:229] Guest: 2024-09-23 12:56:45.504964904 +0000 UTC Remote: 2024-09-23 12:56:45.41898604 +0000 UTC m=+23.627481107 (delta=85.978864ms)
	I0923 12:56:45.530768  682373 fix.go:200] guest clock delta is within tolerance: 85.978864ms
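
The guest-clock check above compares the guest's `date +%s.%N` output against the local wall clock and accepts the machine when the delta is small (85.978864ms here: 1727096205.504964904 − 1727096205.41898604 ≈ 0.086s). A tiny sketch of that comparison, using the raw value from this log as a placeholder:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Raw `date +%s.%N` output from the guest; the value is the one captured in this log.
	guestRaw := "1727096205.504964904"
	secs, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta vs local clock: %v\n", delta)
}
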
	I0923 12:56:45.530777  682373 start.go:83] releasing machines lock for "ha-097312", held for 23.626602839s
	I0923 12:56:45.530803  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.531129  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:45.533942  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.534282  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.534313  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.534510  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.535018  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.535175  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:56:45.535268  682373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:56:45.535329  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.535407  682373 ssh_runner.go:195] Run: cat /version.json
	I0923 12:56:45.535432  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:56:45.538344  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.538693  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.538718  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.538736  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.538916  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.539107  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.539142  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:45.539168  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:45.539301  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.539401  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:56:45.539491  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:45.539522  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:56:45.539669  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:56:45.539871  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:56:45.615078  682373 ssh_runner.go:195] Run: systemctl --version
	I0923 12:56:45.652339  682373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:56:45.814596  682373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:56:45.820480  682373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:56:45.820567  682373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:56:45.837076  682373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:56:45.837109  682373 start.go:495] detecting cgroup driver to use...
	I0923 12:56:45.837204  682373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:56:45.852886  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:56:45.867319  682373 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:56:45.867387  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:56:45.881106  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:56:45.895047  682373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:56:46.010122  682373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:56:46.160036  682373 docker.go:233] disabling docker service ...
	I0923 12:56:46.160166  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:56:46.174281  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:56:46.187289  682373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:56:46.315823  682373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:56:46.451742  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:56:46.465159  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:56:46.485490  682373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:56:46.485567  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.496172  682373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:56:46.496276  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.506865  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.517182  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.527559  682373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:56:46.538362  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.548742  682373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.565850  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:56:46.576416  682373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:56:46.586314  682373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:56:46.586391  682373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:56:46.600960  682373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:56:46.613686  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:56:46.747213  682373 ssh_runner.go:195] Run: sudo systemctl restart crio
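Note on the CRI-O configuration sequence above: the crictl.yaml write plus the sed edits against /etc/crio/crio.conf.d/02-crio.conf set the pause image, switch the cgroup manager to cgroupfs, pin conmon to the pod cgroup, and open unprivileged low ports, after which crio is restarted. A minimal shell sketch to confirm the values those edits are expected to leave behind (key names come from the commands in this log; the exact layout of the stock minikube ISO drop-in file is an assumption):

	# run on the VM after the restart above
	grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected values, per the sed commands in this log:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info >/dev/null && echo "crio socket OK"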
	I0923 12:56:46.833362  682373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:56:46.833455  682373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 12:56:46.838407  682373 start.go:563] Will wait 60s for crictl version
	I0923 12:56:46.838481  682373 ssh_runner.go:195] Run: which crictl
	I0923 12:56:46.842254  682373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:56:46.881238  682373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 12:56:46.881313  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:56:46.910755  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:56:46.941180  682373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:56:46.942573  682373 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 12:56:46.945291  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:46.945654  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:56:46.945683  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:56:46.945901  682373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:56:46.950351  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:56:46.963572  682373 kubeadm.go:883] updating cluster {Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:56:46.963689  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:56:46.963752  682373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:56:46.995863  682373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 12:56:46.995949  682373 ssh_runner.go:195] Run: which lz4
	I0923 12:56:47.000077  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 12:56:47.000199  682373 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 12:56:47.004245  682373 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 12:56:47.004290  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 12:56:48.233778  682373 crio.go:462] duration metric: took 1.233615545s to copy over tarball
	I0923 12:56:48.233872  682373 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 12:56:50.293806  682373 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059892855s)
	I0923 12:56:50.293864  682373 crio.go:469] duration metric: took 2.060053222s to extract the tarball
	I0923 12:56:50.293875  682373 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 12:56:50.330288  682373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:56:50.382422  682373 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 12:56:50.382453  682373 cache_images.go:84] Images are preloaded, skipping loading
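The preload tarball extraction above repopulated CRI-O's image store, which is why the second crictl images call now reports everything as preloaded. A quick hedged spot-check for the specific image the earlier probe looked for:

	# expected to list registry.k8s.io/kube-apiserver with tag v1.31.1 after the extraction above
	sudo crictl images | grep kube-apiserver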
	I0923 12:56:50.382463  682373 kubeadm.go:934] updating node { 192.168.39.160 8443 v1.31.1 crio true true} ...
	I0923 12:56:50.382618  682373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-097312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:56:50.382706  682373 ssh_runner.go:195] Run: crio config
	I0923 12:56:50.429046  682373 cni.go:84] Creating CNI manager for ""
	I0923 12:56:50.429071  682373 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 12:56:50.429081  682373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:56:50.429114  682373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-097312 NodeName:ha-097312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:56:50.429251  682373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-097312"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 12:56:50.429291  682373 kube-vip.go:115] generating kube-vip config ...
	I0923 12:56:50.429336  682373 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:56:50.447284  682373 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:56:50.447397  682373 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
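The kube-vip static pod above is copied into /etc/kubernetes/manifests a few lines further down, so kubelet launches it alongside the control-plane pods; once it wins the leader lease it should place the HA VIP on the node. A hedged spot-check using only values that appear in the config above (VIP 192.168.39.254, interface eth0, prometheus_server :2112):

	# run on the control-plane host after kubelet has picked up the manifest
	ip addr show eth0 | grep -F 192.168.39.254
	curl -s http://127.0.0.1:2112/metrics | head -n 5   # kube-vip's metrics listener, per prometheus_server above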
	I0923 12:56:50.447453  682373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:56:50.457555  682373 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:56:50.457631  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 12:56:50.467361  682373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 12:56:50.484221  682373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:56:50.501136  682373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 12:56:50.517771  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0923 12:56:50.535030  682373 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:56:50.538926  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:56:50.550841  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:56:50.685055  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:56:50.702466  682373 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312 for IP: 192.168.39.160
	I0923 12:56:50.702500  682373 certs.go:194] generating shared ca certs ...
	I0923 12:56:50.702525  682373 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.702732  682373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:56:50.702796  682373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:56:50.702811  682373 certs.go:256] generating profile certs ...
	I0923 12:56:50.702903  682373 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key
	I0923 12:56:50.702928  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt with IP's: []
	I0923 12:56:50.839973  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt ...
	I0923 12:56:50.840005  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt: {Name:mk3ec295cf75d5f37a812267f291d008d2d41849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.840201  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key ...
	I0923 12:56:50.840215  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key: {Name:mk2a9a6301a953bccf7179cf3fcd9c6c49523a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.840321  682373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9
	I0923 12:56:50.840339  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160 192.168.39.254]
	I0923 12:56:50.957561  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9 ...
	I0923 12:56:50.957598  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9: {Name:mke07e7dcb821169b2edcdcfe37c1283edab6d93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.957795  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9 ...
	I0923 12:56:50.957814  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9: {Name:mk473437de8fd0279ccc88430a74364f16849fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:50.957935  682373 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.3e258ae9 -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	I0923 12:56:50.958016  682373 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.3e258ae9 -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key
	I0923 12:56:50.958070  682373 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key
	I0923 12:56:50.958086  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt with IP's: []
	I0923 12:56:51.039985  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt ...
	I0923 12:56:51.040029  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt: {Name:mk08fe599b3bb9f9eafe363d4dcfa2dc4583d108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:51.040291  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key ...
	I0923 12:56:51.040316  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key: {Name:mke55afec0b5332166375bf6241593073b8f40da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:56:51.040432  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:56:51.040459  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:56:51.040472  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:56:51.040484  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:56:51.040497  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:56:51.040509  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:56:51.040524  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:56:51.040539  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:56:51.040619  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 12:56:51.040660  682373 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 12:56:51.040672  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:56:51.040698  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:56:51.040726  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:56:51.040750  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:56:51.040798  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:56:51.040830  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.040846  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.040863  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.041476  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:56:51.067263  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:56:51.091814  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:56:51.115009  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:56:51.138682  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 12:56:51.162647  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:56:51.186729  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:56:51.210155  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:56:51.233576  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 12:56:51.256633  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 12:56:51.279649  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:56:51.303438  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:56:51.320192  682373 ssh_runner.go:195] Run: openssl version
	I0923 12:56:51.326310  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 12:56:51.337813  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.342410  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.342469  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 12:56:51.348141  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:56:51.358951  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:56:51.369927  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.374498  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.374569  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:56:51.380225  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:56:51.390788  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 12:56:51.401357  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.405984  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.406065  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 12:56:51.411938  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
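The openssl/ln pairs above follow OpenSSL's hashed-directory convention: each CA file under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash plus a .0 suffix, which is what the x509 -hash calls compute. A minimal sketch of the same pattern for one of the certs in this log:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem)   # prints 51391683, matching the symlink above
	sudo ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/${HASH}.0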
	I0923 12:56:51.422798  682373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:56:51.426778  682373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:56:51.426837  682373 kubeadm.go:392] StartCluster: {Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:56:51.426911  682373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 12:56:51.426969  682373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 12:56:51.467074  682373 cri.go:89] found id: ""
	I0923 12:56:51.467159  682373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:56:51.482686  682373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 12:56:51.497867  682373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 12:56:51.512428  682373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:56:51.512454  682373 kubeadm.go:157] found existing configuration files:
	
	I0923 12:56:51.512511  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 12:56:51.529985  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:56:51.530093  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 12:56:51.542142  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 12:56:51.550802  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:56:51.550892  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 12:56:51.560648  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 12:56:51.570247  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:56:51.570324  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 12:56:51.580148  682373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 12:56:51.589038  682373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:56:51.589128  682373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 12:56:51.598472  682373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 12:56:51.709387  682373 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 12:56:51.709477  682373 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 12:56:51.804679  682373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 12:56:51.804878  682373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 12:56:51.805013  682373 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 12:56:51.813809  682373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 12:56:51.816648  682373 out.go:235]   - Generating certificates and keys ...
	I0923 12:56:51.817490  682373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 12:56:51.817573  682373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 12:56:51.891229  682373 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 12:56:51.977862  682373 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 12:56:52.256371  682373 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 12:56:52.418600  682373 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 12:56:52.566134  682373 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 12:56:52.566417  682373 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-097312 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	I0923 12:56:52.754339  682373 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 12:56:52.754631  682373 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-097312 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	I0923 12:56:52.984244  682373 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 12:56:53.199395  682373 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 12:56:53.333105  682373 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 12:56:53.333280  682373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 12:56:53.475215  682373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 12:56:53.703024  682373 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 12:56:53.843337  682373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 12:56:54.031020  682373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 12:56:54.307973  682373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 12:56:54.308522  682373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 12:56:54.312025  682373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 12:56:54.415301  682373 out.go:235]   - Booting up control plane ...
	I0923 12:56:54.415467  682373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 12:56:54.415596  682373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 12:56:54.415675  682373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 12:56:54.415768  682373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 12:56:54.415870  682373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 12:56:54.415955  682373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 12:56:54.481155  682373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 12:56:54.481329  682373 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 12:56:54.981948  682373 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.226424ms
	I0923 12:56:54.982063  682373 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 12:57:01.058259  682373 kubeadm.go:310] [api-check] The API server is healthy after 6.078664089s
	I0923 12:57:01.078738  682373 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 12:57:01.102575  682373 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 12:57:01.638520  682373 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 12:57:01.638793  682373 kubeadm.go:310] [mark-control-plane] Marking the node ha-097312 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 12:57:01.654796  682373 kubeadm.go:310] [bootstrap-token] Using token: tjz9o5.go3sw7ivocitep6z
	I0923 12:57:01.656792  682373 out.go:235]   - Configuring RBAC rules ...
	I0923 12:57:01.656993  682373 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 12:57:01.670875  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 12:57:01.681661  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 12:57:01.686098  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 12:57:01.693270  682373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 12:57:01.698752  682373 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 12:57:01.717473  682373 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 12:57:02.034772  682373 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 12:57:02.465304  682373 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 12:57:02.466345  682373 kubeadm.go:310] 
	I0923 12:57:02.466441  682373 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 12:57:02.466453  682373 kubeadm.go:310] 
	I0923 12:57:02.466593  682373 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 12:57:02.466605  682373 kubeadm.go:310] 
	I0923 12:57:02.466637  682373 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 12:57:02.466743  682373 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 12:57:02.466828  682373 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 12:57:02.466838  682373 kubeadm.go:310] 
	I0923 12:57:02.466914  682373 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 12:57:02.466921  682373 kubeadm.go:310] 
	I0923 12:57:02.466984  682373 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 12:57:02.466993  682373 kubeadm.go:310] 
	I0923 12:57:02.467078  682373 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 12:57:02.467176  682373 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 12:57:02.467278  682373 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 12:57:02.467287  682373 kubeadm.go:310] 
	I0923 12:57:02.467400  682373 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 12:57:02.467489  682373 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 12:57:02.467520  682373 kubeadm.go:310] 
	I0923 12:57:02.467645  682373 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tjz9o5.go3sw7ivocitep6z \
	I0923 12:57:02.467825  682373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff \
	I0923 12:57:02.467866  682373 kubeadm.go:310] 	--control-plane 
	I0923 12:57:02.467876  682373 kubeadm.go:310] 
	I0923 12:57:02.468002  682373 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 12:57:02.468014  682373 kubeadm.go:310] 
	I0923 12:57:02.468111  682373 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tjz9o5.go3sw7ivocitep6z \
	I0923 12:57:02.468232  682373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff 
	I0923 12:57:02.469853  682373 kubeadm.go:310] W0923 12:56:51.688284     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:57:02.470263  682373 kubeadm.go:310] W0923 12:56:51.689248     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:57:02.470417  682373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
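For reference, the --discovery-token-ca-cert-hash value in the join commands printed above is the SHA-256 of the cluster CA's public key. A hedged sketch of how it can be recomputed on this node (certificate path taken from the certificatesDir /var/lib/minikube/certs configured earlier in this log; assumes the RSA CA key minikube generates):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# expected to match the sha256:3fc29dc8... value shown in the kubeadm join output above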
	I0923 12:57:02.470437  682373 cni.go:84] Creating CNI manager for ""
	I0923 12:57:02.470446  682373 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 12:57:02.472858  682373 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 12:57:02.474323  682373 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 12:57:02.479759  682373 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 12:57:02.479789  682373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 12:57:02.504445  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 12:57:02.891714  682373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 12:57:02.891813  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:02.891852  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-097312 minikube.k8s.io/updated_at=2024_09_23T12_57_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-097312 minikube.k8s.io/primary=true
	I0923 12:57:03.052741  682373 ops.go:34] apiserver oom_adj: -16
	I0923 12:57:03.052880  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:03.553199  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:04.053904  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:04.553368  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:05.053003  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:05.553371  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:06.053924  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:06.553890  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:57:06.654158  682373 kubeadm.go:1113] duration metric: took 3.762424286s to wait for elevateKubeSystemPrivileges
	I0923 12:57:06.654208  682373 kubeadm.go:394] duration metric: took 15.227377014s to StartCluster
	I0923 12:57:06.654235  682373 settings.go:142] acquiring lock: {Name:mk3da09e51125fc906a9e1276ab490fc7b26b03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:06.654340  682373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:57:06.655289  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/kubeconfig: {Name:mk213d38080414fbe499f6509d2653fd99103348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:06.655604  682373 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:57:06.655633  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 12:57:06.655653  682373 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 12:57:06.655642  682373 start.go:241] waiting for startup goroutines ...
	I0923 12:57:06.655745  682373 addons.go:69] Setting storage-provisioner=true in profile "ha-097312"
	I0923 12:57:06.655797  682373 addons.go:234] Setting addon storage-provisioner=true in "ha-097312"
	I0923 12:57:06.655834  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:06.655835  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:06.655752  682373 addons.go:69] Setting default-storageclass=true in profile "ha-097312"
	I0923 12:57:06.655926  682373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-097312"
	I0923 12:57:06.656390  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.656400  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.656428  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.656430  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.672616  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43027
	I0923 12:57:06.672985  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44797
	I0923 12:57:06.673168  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.673414  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.673768  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.673789  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.673930  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.673964  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.674169  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.674315  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.674361  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:06.674868  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.674975  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.676732  682373 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:57:06.677135  682373 kapi.go:59] client config for ha-097312: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key", CAFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 12:57:06.677778  682373 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 12:57:06.678102  682373 addons.go:234] Setting addon default-storageclass=true in "ha-097312"
	I0923 12:57:06.678152  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:06.678585  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.678637  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.691933  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
	I0923 12:57:06.692442  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.693010  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.693034  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.693367  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.693647  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:06.694766  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34185
	I0923 12:57:06.695192  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.695549  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:06.695721  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.695737  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.696032  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.696640  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:06.696692  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:06.698001  682373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:57:06.699592  682373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:57:06.699614  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:57:06.699636  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:06.702740  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.703120  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:06.703136  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.703423  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:06.703599  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:06.703736  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:06.703871  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:57:06.713026  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44863
	I0923 12:57:06.713478  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:06.714138  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:06.714157  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:06.714441  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:06.714648  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:06.716436  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:06.716678  682373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:57:06.716694  682373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:57:06.716712  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:06.720029  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.720524  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:06.720549  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:06.720868  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:06.721094  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:06.721284  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:06.721415  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:57:06.794261  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 12:57:06.837196  682373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:57:06.948150  682373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:57:07.376765  682373 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0923 12:57:07.497295  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497329  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497329  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497348  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497659  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.497676  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.497686  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497695  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497795  682373 main.go:141] libmachine: (ha-097312) DBG | Closing plugin on server side
	I0923 12:57:07.497861  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.497875  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.497884  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.497899  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.497941  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.497955  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.498024  682373 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 12:57:07.498041  682373 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 12:57:07.498159  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.498194  682373 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0923 12:57:07.498211  682373 round_trippers.go:469] Request Headers:
	I0923 12:57:07.498225  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:57:07.498231  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:57:07.498235  682373 main.go:141] libmachine: (ha-097312) DBG | Closing plugin on server side
	I0923 12:57:07.498196  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.509952  682373 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 12:57:07.510797  682373 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 12:57:07.510817  682373 round_trippers.go:469] Request Headers:
	I0923 12:57:07.510829  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:57:07.510834  682373 round_trippers.go:473]     Content-Type: application/json
	I0923 12:57:07.510840  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:57:07.513677  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:57:07.513894  682373 main.go:141] libmachine: Making call to close driver server
	I0923 12:57:07.513920  682373 main.go:141] libmachine: (ha-097312) Calling .Close
	I0923 12:57:07.514234  682373 main.go:141] libmachine: Successfully made call to close driver server
	I0923 12:57:07.514256  682373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 12:57:07.516273  682373 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 12:57:07.517649  682373 addons.go:510] duration metric: took 861.992785ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0923 12:57:07.517685  682373 start.go:246] waiting for cluster config update ...
	I0923 12:57:07.517698  682373 start.go:255] writing updated cluster config ...
	I0923 12:57:07.519680  682373 out.go:201] 
	I0923 12:57:07.521371  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:07.521468  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:57:07.523127  682373 out.go:177] * Starting "ha-097312-m02" control-plane node in "ha-097312" cluster
	I0923 12:57:07.524508  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:57:07.524539  682373 cache.go:56] Caching tarball of preloaded images
	I0923 12:57:07.524641  682373 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:57:07.524654  682373 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:57:07.524741  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:57:07.524952  682373 start.go:360] acquireMachinesLock for ha-097312-m02: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:57:07.525025  682373 start.go:364] duration metric: took 44.618µs to acquireMachinesLock for "ha-097312-m02"
	I0923 12:57:07.525047  682373 start.go:93] Provisioning new machine with config: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:57:07.525150  682373 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0923 12:57:07.527045  682373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:57:07.527133  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:07.527160  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:07.542505  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0923 12:57:07.542956  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:07.543542  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:07.543583  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:07.543972  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:07.544208  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:07.544349  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:07.544507  682373 start.go:159] libmachine.API.Create for "ha-097312" (driver="kvm2")
	I0923 12:57:07.544535  682373 client.go:168] LocalClient.Create starting
	I0923 12:57:07.544570  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:57:07.544615  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:57:07.544634  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:57:07.544717  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:57:07.544765  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:57:07.544805  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:57:07.544827  682373 main.go:141] libmachine: Running pre-create checks...
	I0923 12:57:07.544832  682373 main.go:141] libmachine: (ha-097312-m02) Calling .PreCreateCheck
	I0923 12:57:07.545067  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetConfigRaw
	I0923 12:57:07.545510  682373 main.go:141] libmachine: Creating machine...
	I0923 12:57:07.545532  682373 main.go:141] libmachine: (ha-097312-m02) Calling .Create
	I0923 12:57:07.545663  682373 main.go:141] libmachine: (ha-097312-m02) Creating KVM machine...
	I0923 12:57:07.547155  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found existing default KVM network
	I0923 12:57:07.547384  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found existing private KVM network mk-ha-097312
	I0923 12:57:07.547524  682373 main.go:141] libmachine: (ha-097312-m02) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02 ...
	I0923 12:57:07.547546  682373 main.go:141] libmachine: (ha-097312-m02) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:57:07.547624  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.547504  682740 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:57:07.547712  682373 main.go:141] libmachine: (ha-097312-m02) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:57:07.802486  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.802340  682740 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa...
	I0923 12:57:07.948816  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.948688  682740 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/ha-097312-m02.rawdisk...
	I0923 12:57:07.948868  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Writing magic tar header
	I0923 12:57:07.948878  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Writing SSH key tar header
	I0923 12:57:07.948886  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:07.948826  682740 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02 ...
	I0923 12:57:07.949014  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02
	I0923 12:57:07.949056  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02 (perms=drwx------)
	I0923 12:57:07.949066  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:57:07.949084  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:57:07.949106  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:57:07.949118  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:57:07.949129  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:57:07.949139  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:57:07.949156  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:57:07.949167  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:57:07.949178  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Checking permissions on dir: /home
	I0923 12:57:07.949191  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Skipping /home - not owner
	I0923 12:57:07.949205  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:57:07.949217  682373 main.go:141] libmachine: (ha-097312-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:57:07.949229  682373 main.go:141] libmachine: (ha-097312-m02) Creating domain...
	I0923 12:57:07.950603  682373 main.go:141] libmachine: (ha-097312-m02) define libvirt domain using xml: 
	I0923 12:57:07.950628  682373 main.go:141] libmachine: (ha-097312-m02) <domain type='kvm'>
	I0923 12:57:07.950638  682373 main.go:141] libmachine: (ha-097312-m02)   <name>ha-097312-m02</name>
	I0923 12:57:07.950648  682373 main.go:141] libmachine: (ha-097312-m02)   <memory unit='MiB'>2200</memory>
	I0923 12:57:07.950655  682373 main.go:141] libmachine: (ha-097312-m02)   <vcpu>2</vcpu>
	I0923 12:57:07.950665  682373 main.go:141] libmachine: (ha-097312-m02)   <features>
	I0923 12:57:07.950672  682373 main.go:141] libmachine: (ha-097312-m02)     <acpi/>
	I0923 12:57:07.950678  682373 main.go:141] libmachine: (ha-097312-m02)     <apic/>
	I0923 12:57:07.950685  682373 main.go:141] libmachine: (ha-097312-m02)     <pae/>
	I0923 12:57:07.950692  682373 main.go:141] libmachine: (ha-097312-m02)     
	I0923 12:57:07.950704  682373 main.go:141] libmachine: (ha-097312-m02)   </features>
	I0923 12:57:07.950712  682373 main.go:141] libmachine: (ha-097312-m02)   <cpu mode='host-passthrough'>
	I0923 12:57:07.950720  682373 main.go:141] libmachine: (ha-097312-m02)   
	I0923 12:57:07.950726  682373 main.go:141] libmachine: (ha-097312-m02)   </cpu>
	I0923 12:57:07.950755  682373 main.go:141] libmachine: (ha-097312-m02)   <os>
	I0923 12:57:07.950767  682373 main.go:141] libmachine: (ha-097312-m02)     <type>hvm</type>
	I0923 12:57:07.950775  682373 main.go:141] libmachine: (ha-097312-m02)     <boot dev='cdrom'/>
	I0923 12:57:07.950783  682373 main.go:141] libmachine: (ha-097312-m02)     <boot dev='hd'/>
	I0923 12:57:07.950795  682373 main.go:141] libmachine: (ha-097312-m02)     <bootmenu enable='no'/>
	I0923 12:57:07.950802  682373 main.go:141] libmachine: (ha-097312-m02)   </os>
	I0923 12:57:07.950814  682373 main.go:141] libmachine: (ha-097312-m02)   <devices>
	I0923 12:57:07.950825  682373 main.go:141] libmachine: (ha-097312-m02)     <disk type='file' device='cdrom'>
	I0923 12:57:07.950841  682373 main.go:141] libmachine: (ha-097312-m02)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/boot2docker.iso'/>
	I0923 12:57:07.950853  682373 main.go:141] libmachine: (ha-097312-m02)       <target dev='hdc' bus='scsi'/>
	I0923 12:57:07.950887  682373 main.go:141] libmachine: (ha-097312-m02)       <readonly/>
	I0923 12:57:07.950906  682373 main.go:141] libmachine: (ha-097312-m02)     </disk>
	I0923 12:57:07.950914  682373 main.go:141] libmachine: (ha-097312-m02)     <disk type='file' device='disk'>
	I0923 12:57:07.950920  682373 main.go:141] libmachine: (ha-097312-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:57:07.950931  682373 main.go:141] libmachine: (ha-097312-m02)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/ha-097312-m02.rawdisk'/>
	I0923 12:57:07.950938  682373 main.go:141] libmachine: (ha-097312-m02)       <target dev='hda' bus='virtio'/>
	I0923 12:57:07.950943  682373 main.go:141] libmachine: (ha-097312-m02)     </disk>
	I0923 12:57:07.950950  682373 main.go:141] libmachine: (ha-097312-m02)     <interface type='network'>
	I0923 12:57:07.950956  682373 main.go:141] libmachine: (ha-097312-m02)       <source network='mk-ha-097312'/>
	I0923 12:57:07.950962  682373 main.go:141] libmachine: (ha-097312-m02)       <model type='virtio'/>
	I0923 12:57:07.950967  682373 main.go:141] libmachine: (ha-097312-m02)     </interface>
	I0923 12:57:07.950973  682373 main.go:141] libmachine: (ha-097312-m02)     <interface type='network'>
	I0923 12:57:07.950979  682373 main.go:141] libmachine: (ha-097312-m02)       <source network='default'/>
	I0923 12:57:07.950988  682373 main.go:141] libmachine: (ha-097312-m02)       <model type='virtio'/>
	I0923 12:57:07.951022  682373 main.go:141] libmachine: (ha-097312-m02)     </interface>
	I0923 12:57:07.951047  682373 main.go:141] libmachine: (ha-097312-m02)     <serial type='pty'>
	I0923 12:57:07.951056  682373 main.go:141] libmachine: (ha-097312-m02)       <target port='0'/>
	I0923 12:57:07.951071  682373 main.go:141] libmachine: (ha-097312-m02)     </serial>
	I0923 12:57:07.951083  682373 main.go:141] libmachine: (ha-097312-m02)     <console type='pty'>
	I0923 12:57:07.951094  682373 main.go:141] libmachine: (ha-097312-m02)       <target type='serial' port='0'/>
	I0923 12:57:07.951104  682373 main.go:141] libmachine: (ha-097312-m02)     </console>
	I0923 12:57:07.951110  682373 main.go:141] libmachine: (ha-097312-m02)     <rng model='virtio'>
	I0923 12:57:07.951122  682373 main.go:141] libmachine: (ha-097312-m02)       <backend model='random'>/dev/random</backend>
	I0923 12:57:07.951132  682373 main.go:141] libmachine: (ha-097312-m02)     </rng>
	I0923 12:57:07.951139  682373 main.go:141] libmachine: (ha-097312-m02)     
	I0923 12:57:07.951147  682373 main.go:141] libmachine: (ha-097312-m02)     
	I0923 12:57:07.951155  682373 main.go:141] libmachine: (ha-097312-m02)   </devices>
	I0923 12:57:07.951170  682373 main.go:141] libmachine: (ha-097312-m02) </domain>
	I0923 12:57:07.951208  682373 main.go:141] libmachine: (ha-097312-m02) 
	I0923 12:57:07.958737  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:28:cf:23 in network default
	I0923 12:57:07.959212  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:07.959260  682373 main.go:141] libmachine: (ha-097312-m02) Ensuring networks are active...
	I0923 12:57:07.960010  682373 main.go:141] libmachine: (ha-097312-m02) Ensuring network default is active
	I0923 12:57:07.960399  682373 main.go:141] libmachine: (ha-097312-m02) Ensuring network mk-ha-097312 is active
	I0923 12:57:07.960872  682373 main.go:141] libmachine: (ha-097312-m02) Getting domain xml...
	I0923 12:57:07.961596  682373 main.go:141] libmachine: (ha-097312-m02) Creating domain...
	I0923 12:57:09.236958  682373 main.go:141] libmachine: (ha-097312-m02) Waiting to get IP...
	I0923 12:57:09.237872  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:09.238432  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:09.238520  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:09.238409  682740 retry.go:31] will retry after 258.996903ms: waiting for machine to come up
	I0923 12:57:09.498848  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:09.499271  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:09.499300  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:09.499216  682740 retry.go:31] will retry after 390.01253ms: waiting for machine to come up
	I0923 12:57:09.890994  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:09.891540  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:09.891572  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:09.891465  682740 retry.go:31] will retry after 371.935324ms: waiting for machine to come up
	I0923 12:57:10.265244  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:10.265618  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:10.265655  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:10.265585  682740 retry.go:31] will retry after 510.543016ms: waiting for machine to come up
	I0923 12:57:10.777241  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:10.777723  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:10.777746  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:10.777656  682740 retry.go:31] will retry after 522.337855ms: waiting for machine to come up
	I0923 12:57:11.302530  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:11.303002  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:11.303023  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:11.302970  682740 retry.go:31] will retry after 745.395576ms: waiting for machine to come up
	I0923 12:57:12.049866  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:12.050223  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:12.050249  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:12.050180  682740 retry.go:31] will retry after 791.252666ms: waiting for machine to come up
	I0923 12:57:12.842707  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:12.843212  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:12.843250  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:12.843171  682740 retry.go:31] will retry after 1.03083414s: waiting for machine to come up
	I0923 12:57:13.876177  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:13.876677  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:13.876711  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:13.876621  682740 retry.go:31] will retry after 1.686909518s: waiting for machine to come up
	I0923 12:57:15.565124  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:15.565550  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:15.565574  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:15.565500  682740 retry.go:31] will retry after 1.944756654s: waiting for machine to come up
	I0923 12:57:17.512182  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:17.512709  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:17.512742  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:17.512627  682740 retry.go:31] will retry after 2.056101086s: waiting for machine to come up
	I0923 12:57:19.569989  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:19.570397  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:19.570422  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:19.570360  682740 retry.go:31] will retry after 2.406826762s: waiting for machine to come up
	I0923 12:57:21.980169  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:21.980856  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:21.980887  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:21.980793  682740 retry.go:31] will retry after 3.38134268s: waiting for machine to come up
	I0923 12:57:25.364366  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:25.364892  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find current IP address of domain ha-097312-m02 in network mk-ha-097312
	I0923 12:57:25.364919  682373 main.go:141] libmachine: (ha-097312-m02) DBG | I0923 12:57:25.364848  682740 retry.go:31] will retry after 4.745352265s: waiting for machine to come up
	I0923 12:57:30.113738  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.114252  682373 main.go:141] libmachine: (ha-097312-m02) Found IP for machine: 192.168.39.214
	I0923 12:57:30.114286  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has current primary IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.114295  682373 main.go:141] libmachine: (ha-097312-m02) Reserving static IP address...
	I0923 12:57:30.114645  682373 main.go:141] libmachine: (ha-097312-m02) DBG | unable to find host DHCP lease matching {name: "ha-097312-m02", mac: "52:54:00:aa:9c:e4", ip: "192.168.39.214"} in network mk-ha-097312
	I0923 12:57:30.195004  682373 main.go:141] libmachine: (ha-097312-m02) Reserved static IP address: 192.168.39.214
	I0923 12:57:30.195029  682373 main.go:141] libmachine: (ha-097312-m02) Waiting for SSH to be available...
	I0923 12:57:30.195051  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Getting to WaitForSSH function...
	I0923 12:57:30.198064  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.198485  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.198516  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.198655  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Using SSH client type: external
	I0923 12:57:30.198683  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa (-rw-------)
	I0923 12:57:30.198704  682373 main.go:141] libmachine: (ha-097312-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:57:30.198716  682373 main.go:141] libmachine: (ha-097312-m02) DBG | About to run SSH command:
	I0923 12:57:30.198732  682373 main.go:141] libmachine: (ha-097312-m02) DBG | exit 0
	I0923 12:57:30.322102  682373 main.go:141] libmachine: (ha-097312-m02) DBG | SSH cmd err, output: <nil>: 
	I0923 12:57:30.322535  682373 main.go:141] libmachine: (ha-097312-m02) KVM machine creation complete!
	I0923 12:57:30.322889  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetConfigRaw
	I0923 12:57:30.324198  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:30.325129  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:30.325321  682373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:57:30.325347  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetState
	I0923 12:57:30.327097  682373 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:57:30.327120  682373 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:57:30.327127  682373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:57:30.327136  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.330398  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.330831  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.330856  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.331084  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.331333  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.331567  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.331779  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.331980  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.332285  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.332308  682373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:57:30.433384  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:57:30.433417  682373 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:57:30.433425  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.436332  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.436753  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.436787  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.436960  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.437226  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.437407  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.437534  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.437680  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.437907  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.437921  682373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:57:30.542610  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:57:30.542690  682373 main.go:141] libmachine: found compatible host: buildroot
	I0923 12:57:30.542698  682373 main.go:141] libmachine: Provisioning with buildroot...
	I0923 12:57:30.542708  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:30.543041  682373 buildroot.go:166] provisioning hostname "ha-097312-m02"
	I0923 12:57:30.543071  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:30.543236  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.546448  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.546897  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.546919  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.547099  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.547300  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.547478  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.547640  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.547814  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.548056  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.548076  682373 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-097312-m02 && echo "ha-097312-m02" | sudo tee /etc/hostname
	I0923 12:57:30.664801  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312-m02
	
	I0923 12:57:30.664827  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.668130  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.668523  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.668560  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.668734  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.668953  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.669161  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.669310  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.669479  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:30.669670  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:30.669692  682373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-097312-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-097312-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-097312-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:57:30.782645  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:57:30.782678  682373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:57:30.782699  682373 buildroot.go:174] setting up certificates
	I0923 12:57:30.782714  682373 provision.go:84] configureAuth start
	I0923 12:57:30.782725  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetMachineName
	I0923 12:57:30.783040  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:30.785945  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.786433  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.786470  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.786603  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.788815  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.789202  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.789235  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.789394  682373 provision.go:143] copyHostCerts
	I0923 12:57:30.789433  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:57:30.789475  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 12:57:30.789485  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:57:30.789576  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 12:57:30.789670  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:57:30.789696  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 12:57:30.789707  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:57:30.789745  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:57:30.789814  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:57:30.789859  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 12:57:30.789868  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:57:30.789903  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:57:30.789977  682373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.ha-097312-m02 san=[127.0.0.1 192.168.39.214 ha-097312-m02 localhost minikube]
	I0923 12:57:30.922412  682373 provision.go:177] copyRemoteCerts
	I0923 12:57:30.922481  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:57:30.922511  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:30.925683  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.926050  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:30.926084  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:30.926274  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:30.926483  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:30.926675  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:30.926797  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.008599  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:57:31.008683  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:57:31.033933  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:57:31.034023  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:57:31.058490  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:57:31.058585  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:57:31.083172  682373 provision.go:87] duration metric: took 300.435238ms to configureAuth
	I0923 12:57:31.083208  682373 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:57:31.083452  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:31.083557  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.086620  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.087006  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.087040  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.087226  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.087462  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.087673  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.087823  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.088047  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:31.088262  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:31.088294  682373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:57:31.308105  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 12:57:31.308130  682373 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:57:31.308138  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetURL
	I0923 12:57:31.309535  682373 main.go:141] libmachine: (ha-097312-m02) DBG | Using libvirt version 6000000
	I0923 12:57:31.312541  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.312973  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.313010  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.313204  682373 main.go:141] libmachine: Docker is up and running!
	I0923 12:57:31.313219  682373 main.go:141] libmachine: Reticulating splines...
	I0923 12:57:31.313229  682373 client.go:171] duration metric: took 23.76868403s to LocalClient.Create
	I0923 12:57:31.313256  682373 start.go:167] duration metric: took 23.768751533s to libmachine.API.Create "ha-097312"
	I0923 12:57:31.313265  682373 start.go:293] postStartSetup for "ha-097312-m02" (driver="kvm2")
	I0923 12:57:31.313279  682373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:57:31.313296  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.313570  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:57:31.313596  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.315984  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.316386  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.316408  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.316617  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.316830  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.316990  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.317121  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.400827  682373 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:57:31.404978  682373 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:57:31.405008  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:57:31.405090  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:57:31.405188  682373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 12:57:31.405202  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 12:57:31.405345  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:57:31.415010  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:57:31.439229  682373 start.go:296] duration metric: took 125.945282ms for postStartSetup
	I0923 12:57:31.439312  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetConfigRaw
	I0923 12:57:31.439949  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:31.442989  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.443357  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.443391  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.443654  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:57:31.443870  682373 start.go:128] duration metric: took 23.918708009s to createHost
	I0923 12:57:31.443895  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.446222  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.446579  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.446608  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.446760  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.446969  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.447132  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.447282  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.447456  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:57:31.447638  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0923 12:57:31.447648  682373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:57:31.550685  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727096251.508834892
	
	I0923 12:57:31.550719  682373 fix.go:216] guest clock: 1727096251.508834892
	I0923 12:57:31.550731  682373 fix.go:229] Guest: 2024-09-23 12:57:31.508834892 +0000 UTC Remote: 2024-09-23 12:57:31.443883765 +0000 UTC m=+69.652378832 (delta=64.951127ms)
	I0923 12:57:31.550757  682373 fix.go:200] guest clock delta is within tolerance: 64.951127ms
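The fix.go lines above compare the guest clock read over SSH with the host-side timestamp and accept the roughly 65ms skew as within tolerance. A minimal sketch of that comparison; the 2s threshold is illustrative, since the actual tolerance value is not shown in the log:

```go
// Sketch: compare guest and host clocks and decide whether the skew is
// acceptable. The 2s tolerance here is illustrative, not minikube's setting.
package main

import (
	"fmt"
	"time"
)

func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(64951127 * time.Nanosecond) // ~64.95ms, as in the log
	delta, ok := clockWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
```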
	I0923 12:57:31.550765  682373 start.go:83] releasing machines lock for "ha-097312-m02", held for 24.025730497s
	I0923 12:57:31.550798  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.551124  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:31.554365  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.554798  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.554829  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.557342  682373 out.go:177] * Found network options:
	I0923 12:57:31.558765  682373 out.go:177]   - NO_PROXY=192.168.39.160
	W0923 12:57:31.560271  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:57:31.560309  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.561020  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.561228  682373 main.go:141] libmachine: (ha-097312-m02) Calling .DriverName
	I0923 12:57:31.561372  682373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:57:31.561417  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	W0923 12:57:31.561455  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:57:31.561533  682373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:57:31.561554  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHHostname
	I0923 12:57:31.564108  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564231  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564516  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.564549  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564574  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:31.564586  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:31.564758  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.564856  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHPort
	I0923 12:57:31.564956  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.565019  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHKeyPath
	I0923 12:57:31.565102  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.565177  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetSSHUsername
	I0923 12:57:31.565238  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.565280  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m02/id_rsa Username:docker}
	I0923 12:57:31.802089  682373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:57:31.808543  682373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:57:31.808622  682373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:57:31.824457  682373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:57:31.824502  682373 start.go:495] detecting cgroup driver to use...
	I0923 12:57:31.824591  682373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:57:31.842591  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:57:31.857349  682373 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:57:31.857432  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:57:31.871118  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:57:31.884433  682373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:57:31.998506  682373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:57:32.140771  682373 docker.go:233] disabling docker service ...
	I0923 12:57:32.140848  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:57:32.154917  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:57:32.167722  682373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:57:32.306721  682373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:57:32.442305  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:57:32.455563  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:57:32.473584  682373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:57:32.473664  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.483856  682373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:57:32.483926  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.493889  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.503832  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.514226  682373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:57:32.524620  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.534430  682373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:57:32.550444  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
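The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image set to registry.k8s.io/pause:3.10, cgroup manager forced to cgroupfs, conmon_cgroup pinned to "pod", and net.ipv4.ip_unprivileged_port_start=0 added to default_sysctls. A Go sketch of a few of those rewrites applied to an in-memory sample drop-in; the sample's initial values and section layout are assumptions for illustration:

```go
// Sketch: apply the same key rewrites the sed commands above perform on
// /etc/crio/crio.conf.d/02-crio.conf, shown here against an assumed sample.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)conmon_cgroup = .*`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	fmt.Print(conf)
}
```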
	I0923 12:57:32.560917  682373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:57:32.570816  682373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:57:32.570878  682373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:57:32.583098  682373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:57:32.592948  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:57:32.720270  682373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 12:57:32.812338  682373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:57:32.812420  682373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
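After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear before checking crictl. A minimal sketch of such a bounded wait for a socket path; the 500ms poll interval is an assumption:

```go
// Sketch: poll for a UNIX socket path until it exists or a deadline passes.
// The 500ms poll interval is an assumption for illustration.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
```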
	I0923 12:57:32.817090  682373 start.go:563] Will wait 60s for crictl version
	I0923 12:57:32.817148  682373 ssh_runner.go:195] Run: which crictl
	I0923 12:57:32.820890  682373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:57:32.862384  682373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 12:57:32.862475  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:57:32.889442  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:57:32.919399  682373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:57:32.921499  682373 out.go:177]   - env NO_PROXY=192.168.39.160
	I0923 12:57:32.923091  682373 main.go:141] libmachine: (ha-097312-m02) Calling .GetIP
	I0923 12:57:32.926243  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:32.926570  682373 main.go:141] libmachine: (ha-097312-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:9c:e4", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:57:21 +0000 UTC Type:0 Mac:52:54:00:aa:9c:e4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-097312-m02 Clientid:01:52:54:00:aa:9c:e4}
	I0923 12:57:32.926593  682373 main.go:141] libmachine: (ha-097312-m02) DBG | domain ha-097312-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:aa:9c:e4 in network mk-ha-097312
	I0923 12:57:32.926824  682373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:57:32.930826  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
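The one-liner above keeps the /etc/hosts edit idempotent: it filters out any existing host.minikube.internal line and appends a fresh mapping. A Go sketch of the same read-filter-append pattern; this is an illustration only, minikube shells out exactly as shown, and the sample file path avoids needing root:

```go
// Sketch: idempotently pin a hostname in an /etc/hosts-style file by dropping
// any stale line for the name and appending the desired mapping, mirroring
// the grep -v / echo one-liner in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("hosts.sample", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```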
	I0923 12:57:32.942746  682373 mustload.go:65] Loading cluster: ha-097312
	I0923 12:57:32.942993  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:57:32.943344  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:32.943396  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:32.959345  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0923 12:57:32.959837  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:32.960440  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:32.960462  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:32.960839  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:32.961073  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:57:32.962981  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:32.963304  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:32.963359  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:32.979062  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I0923 12:57:32.979655  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:32.980147  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:32.980171  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:32.980553  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:32.980783  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:32.980997  682373 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312 for IP: 192.168.39.214
	I0923 12:57:32.981024  682373 certs.go:194] generating shared ca certs ...
	I0923 12:57:32.981042  682373 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:32.981215  682373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:57:32.981259  682373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:57:32.981266  682373 certs.go:256] generating profile certs ...
	I0923 12:57:32.981360  682373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key
	I0923 12:57:32.981395  682373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f
	I0923 12:57:32.981420  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160 192.168.39.214 192.168.39.254]
	I0923 12:57:33.071795  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f ...
	I0923 12:57:33.071829  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f: {Name:mk62bd79cb1d47d4e42d7ff40584a205e823ac92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:33.072049  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f ...
	I0923 12:57:33.072069  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f: {Name:mk7d02454991cfe0917d276979b247a33b0bbebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:57:33.072179  682373 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.61cdc51f -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	I0923 12:57:33.072334  682373 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.61cdc51f -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key
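The new control-plane node receives an API server certificate whose IP SANs cover the in-cluster service IP (10.96.0.1), localhost, both node IPs, and the kube-vip address 192.168.39.254. A crypto/x509 sketch of issuing such a certificate; for brevity the CA is created inline and errors are ignored, whereas minikube signs with the existing minikubeCA key pair:

```go
// Sketch: issue an API-server serving certificate with the IP SANs listed in
// the log. For brevity the CA is generated inline and errors are skipped;
// minikube signs with the existing minikubeCA key pair instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs from the log above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.160"), net.ParseIP("192.168.39.214"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```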
	I0923 12:57:33.072469  682373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key
	I0923 12:57:33.072488  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:57:33.072504  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:57:33.072515  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:57:33.072525  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:57:33.072541  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:57:33.072553  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:57:33.072563  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:57:33.072575  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:57:33.072624  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 12:57:33.072650  682373 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 12:57:33.072659  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:57:33.072682  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:57:33.072703  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:57:33.072727  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:57:33.072766  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:57:33.072809  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.072831  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.072841  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.072884  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:33.076209  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:33.076612  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:33.076643  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:33.076790  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:33.077013  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:33.077175  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:33.077328  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:57:33.154333  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 12:57:33.159047  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 12:57:33.170550  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 12:57:33.175236  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0923 12:57:33.186589  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 12:57:33.192195  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 12:57:33.206938  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 12:57:33.211432  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 12:57:33.222459  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 12:57:33.226550  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 12:57:33.237861  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 12:57:33.242413  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1671 bytes)
	I0923 12:57:33.252582  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:57:33.276338  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:57:33.301928  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:57:33.327107  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:57:33.353167  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 12:57:33.377281  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:57:33.401324  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:57:33.426736  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:57:33.451659  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 12:57:33.475444  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:57:33.500205  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 12:57:33.524995  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 12:57:33.542090  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0923 12:57:33.558637  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 12:57:33.577724  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 12:57:33.595235  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 12:57:33.613246  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1671 bytes)
	I0923 12:57:33.629756  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 12:57:33.646976  682373 ssh_runner.go:195] Run: openssl version
	I0923 12:57:33.652839  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:57:33.665921  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.671324  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.671395  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:57:33.677752  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:57:33.688883  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 12:57:33.699858  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.704184  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.704258  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 12:57:33.709888  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 12:57:33.720601  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 12:57:33.731770  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.736581  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.736662  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 12:57:33.742744  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
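Each CA bundle placed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0) so TLS libraries can locate it. A sketch that shells out to openssl for the hash and creates the link, mirroring the commands above; this is not minikube's code, which runs the test/ln one-liners shown in the log:

```go
// Sketch: compute a certificate's OpenSSL subject hash and symlink it into a
// CA directory, mirroring the openssl/ln commands in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already linked
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked:", link)
}
```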
	I0923 12:57:33.754098  682373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:57:33.758320  682373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:57:33.758398  682373 kubeadm.go:934] updating node {m02 192.168.39.214 8443 v1.31.1 crio true true} ...
	I0923 12:57:33.758510  682373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-097312-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:57:33.758543  682373 kube-vip.go:115] generating kube-vip config ...
	I0923 12:57:33.758604  682373 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:57:33.773852  682373 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:57:33.773946  682373 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
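The static pod above runs kube-vip in ARP mode on eth0, advertises the HA VIP 192.168.39.254 on port 8443, and enables leader election plus the control-plane load balancer. As an illustration only, and not minikube's actual template, the address, interface, and port could be injected into such a manifest with text/template:

```go
// Sketch: parameterize the VIP address, interface, and port of a kube-vip
// static pod manifest with text/template. Illustrative only; minikube
// generates the full manifest shown in the log.
package main

import (
	"os"
	"text/template"
)

const snippet = `    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(snippet))
	_ = t.Execute(os.Stdout, struct {
		Interface, VIP string
		Port           int
	}{Interface: "eth0", VIP: "192.168.39.254", Port: 8443})
}
```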
	I0923 12:57:33.774016  682373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:57:33.784005  682373 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 12:57:33.784077  682373 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 12:57:33.795537  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 12:57:33.795576  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:57:33.795628  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:57:33.795645  682373 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0923 12:57:33.795645  682373 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0923 12:57:33.800211  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 12:57:33.800250  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 12:57:34.690726  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:57:34.690835  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:57:34.695973  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 12:57:34.696015  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 12:57:34.821772  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:57:34.859449  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:57:34.859576  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:57:34.865043  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 12:57:34.865081  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
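The kubectl, kubeadm, and kubelet binaries are fetched from dl.k8s.io with a published .sha256 checksum, cached locally, and copied to /var/lib/minikube/binaries/v1.31.1 only when the existence check on the guest fails. A sketch of the download-and-verify step implied by the checksum=file:... URLs above; a standalone illustration, not minikube's downloader, which goes through its local cache:

```go
// Sketch: download a release binary and verify it against the published
// .sha256 file, as the checksum=file:... URLs in the log imply.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("verified and saved kubeadm")
}
```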
	I0923 12:57:35.467374  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 12:57:35.477615  682373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 12:57:35.494947  682373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:57:35.511461  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 12:57:35.528089  682373 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:57:35.532321  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:57:35.545355  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:57:35.675932  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:57:35.693246  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:57:35.693787  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:57:35.693897  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:57:35.709354  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I0923 12:57:35.709824  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:57:35.710378  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:57:35.710405  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:57:35.710810  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:57:35.711063  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:57:35.711227  682373 start.go:317] joinCluster: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:57:35.711360  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 12:57:35.711378  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:57:35.714477  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:35.714953  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:57:35.714989  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:57:35.715229  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:57:35.715442  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:57:35.715639  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:57:35.715775  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:57:35.872553  682373 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:57:35.872604  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xyxxia.g4s5n9l2o4j0fmlt --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m02 --control-plane --apiserver-advertise-address=192.168.39.214 --apiserver-bind-port=8443"
	I0923 12:57:59.258533  682373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xyxxia.g4s5n9l2o4j0fmlt --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m02 --control-plane --apiserver-advertise-address=192.168.39.214 --apiserver-bind-port=8443": (23.385898049s)
	I0923 12:57:59.258586  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 12:57:59.796861  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-097312-m02 minikube.k8s.io/updated_at=2024_09_23T12_57_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-097312 minikube.k8s.io/primary=false
	I0923 12:57:59.924798  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-097312-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 12:58:00.039331  682373 start.go:319] duration metric: took 24.32808596s to joinCluster
	I0923 12:58:00.039429  682373 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:58:00.039711  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:00.041025  682373 out.go:177] * Verifying Kubernetes components...
	I0923 12:58:00.042555  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:58:00.236705  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:58:00.254117  682373 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:58:00.254361  682373 kapi.go:59] client config for ha-097312: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key", CAFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 12:58:00.254428  682373 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.160:8443
	I0923 12:58:00.254651  682373 node_ready.go:35] waiting up to 6m0s for node "ha-097312-m02" to be "Ready" ...
	I0923 12:58:00.254771  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:00.254779  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:00.254788  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:00.254792  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:00.285534  682373 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
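From here the log is a readiness poll: roughly every 500ms the node object is fetched through the API server and its Ready condition is checked, within the 6m0s budget noted above. A client-go sketch of an equivalent loop; minikube issues the raw GETs shown here, and the kubeconfig source and poll interval in this sketch are assumptions:

```go
// Sketch: poll a node's Ready condition with client-go until it is True or a
// timeout expires, mirroring the GET /api/v1/nodes/<name> loop in the log.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the environment here; an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-097312-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Fprintln(os.Stderr, "timed out waiting for node to become Ready")
			os.Exit(1)
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```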
	I0923 12:58:00.755122  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:00.755151  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:00.755162  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:00.755168  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:00.759795  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:01.254994  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:01.255020  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:01.255029  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:01.255034  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:01.269257  682373 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0923 12:58:01.755083  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:01.755109  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:01.755117  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:01.755121  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:01.759623  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:02.255610  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:02.255632  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:02.255641  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:02.255645  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:02.259196  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:02.259691  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:02.755738  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:02.755768  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:02.755777  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:02.755781  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:02.759269  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:03.255079  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:03.255106  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:03.255115  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:03.255120  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:03.259155  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:03.755217  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:03.755244  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:03.755251  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:03.755255  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:03.759086  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:04.255149  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:04.255177  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:04.255187  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:04.255193  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:04.259605  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:04.260038  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:04.755404  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:04.755434  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:04.755446  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:04.755452  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:04.762670  682373 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:58:05.255127  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:05.255157  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:05.255166  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:05.255172  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:05.259007  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:05.755425  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:05.755458  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:05.755470  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:05.755475  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:05.759105  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:06.255090  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:06.255119  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:06.255128  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:06.255134  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:06.259815  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:06.260439  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:06.755181  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:06.755209  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:06.755219  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:06.755226  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:06.758768  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:07.255412  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:07.255447  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:07.255458  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:07.255466  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:07.258578  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:07.755939  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:07.755966  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:07.755975  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:07.755978  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:07.759564  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:08.255677  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:08.255716  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:08.255730  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:08.255735  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:08.259088  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:08.754970  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:08.755000  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:08.755012  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:08.755020  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:08.758314  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:08.758910  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:09.256074  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:09.256105  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:09.256115  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:09.256120  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:09.259267  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:09.754981  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:09.755005  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:09.755014  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:09.755019  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:09.758517  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:10.255140  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:10.255164  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:10.255173  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:10.255178  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:10.261151  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:58:10.755682  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:10.755711  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:10.755722  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:10.755728  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:10.759364  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:10.759961  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:11.255328  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:11.255355  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:11.255363  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:11.255367  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:11.259613  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:11.755288  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:11.755316  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:11.755331  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:11.755336  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:11.759266  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:12.255138  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:12.255270  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:12.255308  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:12.255317  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:12.259134  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:12.755572  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:12.755596  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:12.755604  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:12.755610  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:12.758861  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:13.255907  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:13.255934  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:13.255942  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:13.255946  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:13.259259  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:13.259818  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:13.755217  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:13.755243  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:13.755251  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:13.755255  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:13.759226  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:14.255176  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:14.255208  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:14.255219  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:14.255226  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:14.258744  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:14.755918  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:14.755946  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:14.755953  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:14.755957  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:14.759652  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:15.255703  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:15.255732  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:15.255745  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:15.255754  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:15.259193  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:15.755854  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:15.755888  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:15.755896  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:15.755900  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:15.759137  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:15.759696  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:16.255882  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:16.255910  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:16.255918  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:16.255922  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:16.259597  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:16.755835  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:16.755869  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:16.755887  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:16.755896  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:16.759860  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:17.255730  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:17.255754  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:17.255769  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:17.255773  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:17.259628  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:17.755085  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:17.755111  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:17.755119  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:17.755124  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:17.759249  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:17.759743  682373 node_ready.go:53] node "ha-097312-m02" has status "Ready":"False"
	I0923 12:58:18.255184  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:18.255211  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.255225  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.255242  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.259648  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:18.754896  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:18.754921  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.754930  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.754935  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.759143  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:18.759759  682373 node_ready.go:49] node "ha-097312-m02" has status "Ready":"True"
	I0923 12:58:18.759779  682373 node_ready.go:38] duration metric: took 18.505092333s for node "ha-097312-m02" to be "Ready" ...
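
The loop above is the node-readiness wait: minikube polls GET /api/v1/nodes/ha-097312-m02 roughly every 500ms until the node reports "Ready":"True", which here took about 18.5s. A minimal client-go sketch of the same kind of check follows (this is illustrative, not minikube's own helper; the kubeconfig path is an assumed placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, the same cadence visible in the timestamps above.
	for {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "ha-097312-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
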
	I0923 12:58:18.759789  682373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:58:18.759872  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:18.759882  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.759890  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.759895  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.765186  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:58:18.771234  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.771365  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6g9x2
	I0923 12:58:18.771376  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.771387  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.771396  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.775100  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.775960  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:18.775983  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.775993  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.776003  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.779024  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.779526  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.779547  682373 pod_ready.go:82] duration metric: took 8.277628ms for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.779561  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.779632  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-txcxz
	I0923 12:58:18.779642  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.779652  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.779659  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.782895  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.783552  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:18.783573  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.783582  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.783588  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.786568  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:58:18.787170  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.787189  682373 pod_ready.go:82] duration metric: took 7.619712ms for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.787202  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.787274  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312
	I0923 12:58:18.787284  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.787295  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.787303  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.792015  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:18.792787  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:18.792809  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.792820  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.792826  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.796338  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.796833  682373 pod_ready.go:93] pod "etcd-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.796854  682373 pod_ready.go:82] duration metric: took 9.643589ms for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.796863  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.796938  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m02
	I0923 12:58:18.796951  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.796958  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.796962  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.800096  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:18.800646  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:18.800664  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.800675  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.800680  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.803250  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:58:18.803795  682373 pod_ready.go:93] pod "etcd-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:18.803820  682373 pod_ready.go:82] duration metric: took 6.946045ms for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.803842  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:18.955292  682373 request.go:632] Waited for 151.365865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:58:18.955373  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:58:18.955378  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:18.955388  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:18.955394  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:18.959155  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.155346  682373 request.go:632] Waited for 195.422034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.155457  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.155466  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.155481  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.155491  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.158847  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.159413  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:19.159433  682373 pod_ready.go:82] duration metric: took 355.582451ms for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
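
The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's local rate limiter, which for a plain rest.Config defaults to 5 requests/second with a burst of 10; the bursts of pod and node GETs in this phase exceed that, so requests queue on the client rather than on the API server. If the cadence mattered, the limiter could be loosened before building the clientset. A sketch with illustrative values (not minikube's settings):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10; GET bursts beyond that are queued
	// locally, which is what the "client-side throttling" waits above report.
	config.QPS = 50
	config.Burst = 100
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset ready: %T\n", clientset)
}
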
	I0923 12:58:19.159446  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.355524  682373 request.go:632] Waited for 195.972937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:58:19.355603  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:58:19.355611  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.355624  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.355634  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.358947  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.555060  682373 request.go:632] Waited for 195.299012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:19.555156  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:19.555165  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.555173  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.555180  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.558664  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.559169  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:19.559189  682373 pod_ready.go:82] duration metric: took 399.735219ms for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.559199  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.755252  682373 request.go:632] Waited for 195.975758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:58:19.755347  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:58:19.755367  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.755395  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.755406  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.759281  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:19.955410  682373 request.go:632] Waited for 195.442789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.955490  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:19.955495  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:19.955504  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:19.955551  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:19.960116  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:19.960952  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:19.960978  682373 pod_ready.go:82] duration metric: took 401.771647ms for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:19.960989  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.155181  682373 request.go:632] Waited for 194.10652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:58:20.155288  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:58:20.155299  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.155307  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.155311  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.158904  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:20.355343  682373 request.go:632] Waited for 195.400275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:20.355420  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:20.355425  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.355434  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.355440  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.358631  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:20.359159  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:20.359188  682373 pod_ready.go:82] duration metric: took 398.191037ms for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.359202  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.555330  682373 request.go:632] Waited for 196.021107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:58:20.555406  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:58:20.555412  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.555420  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.555430  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.559151  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:20.755254  682373 request.go:632] Waited for 195.454293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:20.755335  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:20.755340  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.755347  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.755351  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.759445  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:20.760118  682373 pod_ready.go:93] pod "kube-proxy-drj8m" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:20.760139  682373 pod_ready.go:82] duration metric: took 400.929533ms for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.760148  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:20.955378  682373 request.go:632] Waited for 195.139639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:58:20.955478  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:58:20.955488  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:20.955496  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:20.955517  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:20.959839  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:21.155010  682373 request.go:632] Waited for 194.343151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.155079  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.155084  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.155092  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.155096  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.158450  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:21.158954  682373 pod_ready.go:93] pod "kube-proxy-z6ss5" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:21.158974  682373 pod_ready.go:82] duration metric: took 398.819585ms for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.158984  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.355051  682373 request.go:632] Waited for 195.979167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:58:21.355148  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:58:21.355153  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.355161  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.355166  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.359586  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:21.554981  682373 request.go:632] Waited for 194.336515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:21.555072  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:58:21.555080  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.555090  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.555099  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.558426  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:21.558962  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:21.558988  682373 pod_ready.go:82] duration metric: took 399.997577ms for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.558999  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.755254  682373 request.go:632] Waited for 196.12462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:58:21.755345  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:58:21.755351  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.755359  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.755363  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.759215  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:58:21.955895  682373 request.go:632] Waited for 196.121213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.955983  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:58:21.955989  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.955996  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.956001  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.960399  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:21.960900  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:58:21.960922  682373 pod_ready.go:82] duration metric: took 401.915303ms for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:58:21.960933  682373 pod_ready.go:39] duration metric: took 3.201132427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
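
Each pod_ready check above pairs a GET of the pod with a GET of its node and passes once the pod's Ready condition is True. A minimal sketch of that condition check with client-go (illustrative, not minikube's helper; the kubeconfig path is an assumed placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-097312-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s ready: %v\n", pod.Name, podReady(pod))
}
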
	I0923 12:58:21.960950  682373 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:58:21.961025  682373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:58:21.980626  682373 api_server.go:72] duration metric: took 21.941154667s to wait for apiserver process to appear ...
	I0923 12:58:21.980660  682373 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:58:21.980684  682373 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I0923 12:58:21.985481  682373 api_server.go:279] https://192.168.39.160:8443/healthz returned 200:
	ok
	I0923 12:58:21.985563  682373 round_trippers.go:463] GET https://192.168.39.160:8443/version
	I0923 12:58:21.985574  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:21.985582  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:21.985586  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:21.986808  682373 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:58:21.987069  682373 api_server.go:141] control plane version: v1.31.1
	I0923 12:58:21.987104  682373 api_server.go:131] duration metric: took 6.43733ms to wait for apiserver health ...
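
The healthz wait is a GET of /healthz on the API server (a healthy apiserver answers with the literal body "ok", as seen above), followed by a GET of /version to read the control-plane version, v1.31.1 in this run. A sketch of both calls through the clientset's discovery REST client (same assumed kubeconfig placeholder as in the earlier sketches):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// GET /healthz returns the body "ok" when the apiserver is healthy.
	body, err := clientset.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
	// GET /version reports the control-plane version.
	version, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", version.GitVersion)
}
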
	I0923 12:58:21.987113  682373 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:58:22.155587  682373 request.go:632] Waited for 168.378674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.155651  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.155657  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.155665  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.155669  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.166855  682373 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 12:58:22.174103  682373 system_pods.go:59] 17 kube-system pods found
	I0923 12:58:22.174149  682373 system_pods.go:61] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:58:22.174157  682373 system_pods.go:61] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:58:22.174164  682373 system_pods.go:61] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:58:22.174170  682373 system_pods.go:61] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:58:22.174176  682373 system_pods.go:61] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:58:22.174182  682373 system_pods.go:61] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:58:22.174188  682373 system_pods.go:61] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:58:22.174194  682373 system_pods.go:61] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:58:22.174199  682373 system_pods.go:61] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:58:22.174205  682373 system_pods.go:61] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:58:22.174214  682373 system_pods.go:61] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:58:22.174226  682373 system_pods.go:61] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:58:22.174233  682373 system_pods.go:61] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:58:22.174240  682373 system_pods.go:61] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:58:22.174247  682373 system_pods.go:61] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:58:22.174253  682373 system_pods.go:61] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:58:22.174264  682373 system_pods.go:61] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:58:22.174277  682373 system_pods.go:74] duration metric: took 187.156047ms to wait for pod list to return data ...
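
The system_pods steps list everything in kube-system, first waiting for the pod list to return data and shortly afterwards (system_pods.go:116 below) requiring the listed pods to be Running. A small helper in the spirit of that second check (illustrative only; in practice the PodList would come from clientset.CoreV1().Pods("kube-system").List as in the earlier sketches):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// allRunning reports whether every pod in the list is in the Running phase.
func allRunning(pods *corev1.PodList) bool {
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

func main() {
	// A stub list keeps this sketch self-contained; real usage feeds it the
	// kube-system PodList fetched with a clientset.
	stub := &corev1.PodList{Items: []corev1.Pod{{Status: corev1.PodStatus{Phase: corev1.PodRunning}}}}
	fmt.Println("all running:", allRunning(stub))
}
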
	I0923 12:58:22.174293  682373 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:58:22.355843  682373 request.go:632] Waited for 181.449658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:58:22.355909  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:58:22.355914  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.355922  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.355927  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.360440  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:22.360699  682373 default_sa.go:45] found service account: "default"
	I0923 12:58:22.360716  682373 default_sa.go:55] duration metric: took 186.414512ms for default service account to be created ...
	I0923 12:58:22.360725  682373 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:58:22.555206  682373 request.go:632] Waited for 194.405433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.555295  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:58:22.555301  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.555308  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.555316  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.560454  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:58:22.566018  682373 system_pods.go:86] 17 kube-system pods found
	I0923 12:58:22.566047  682373 system_pods.go:89] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:58:22.566053  682373 system_pods.go:89] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:58:22.566057  682373 system_pods.go:89] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:58:22.566061  682373 system_pods.go:89] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:58:22.566064  682373 system_pods.go:89] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:58:22.566068  682373 system_pods.go:89] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:58:22.566072  682373 system_pods.go:89] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:58:22.566075  682373 system_pods.go:89] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:58:22.566079  682373 system_pods.go:89] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:58:22.566083  682373 system_pods.go:89] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:58:22.566086  682373 system_pods.go:89] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:58:22.566090  682373 system_pods.go:89] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:58:22.566093  682373 system_pods.go:89] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:58:22.566097  682373 system_pods.go:89] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:58:22.566100  682373 system_pods.go:89] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:58:22.566103  682373 system_pods.go:89] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:58:22.566106  682373 system_pods.go:89] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:58:22.566112  682373 system_pods.go:126] duration metric: took 205.38119ms to wait for k8s-apps to be running ...
	I0923 12:58:22.566121  682373 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:58:22.566168  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:58:22.581419  682373 system_svc.go:56] duration metric: took 15.287038ms WaitForService to wait for kubelet
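
The kubelet check is simply `sudo systemctl is-active --quiet service kubelet` run over SSH on the node; a zero exit status means the unit is active. Run locally, the equivalent looks like the sketch below (the SSH plumbing from the log is omitted):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet` prints nothing and reports state via its
	// exit code: 0 = active, non-zero = inactive/failed/unknown.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
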
	I0923 12:58:22.581451  682373 kubeadm.go:582] duration metric: took 22.541987533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:58:22.581470  682373 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:58:22.755938  682373 request.go:632] Waited for 174.364793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes
	I0923 12:58:22.756006  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes
	I0923 12:58:22.756011  682373 round_trippers.go:469] Request Headers:
	I0923 12:58:22.756019  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:58:22.756027  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:58:22.760246  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:58:22.760965  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:58:22.760989  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:58:22.761000  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:58:22.761004  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:58:22.761010  682373 node_conditions.go:105] duration metric: took 179.533922ms to run NodePressure ...
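
The NodePressure step lists the cluster's nodes and reads their capacity, which is where the 17734596Ki ephemeral-storage and 2-CPU figures above come from. A sketch of reading those values from Node.Status.Capacity (same assumed kubeconfig placeholder as before):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), storage.String())
	}
}
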
	I0923 12:58:22.761032  682373 start.go:241] waiting for startup goroutines ...
	I0923 12:58:22.761061  682373 start.go:255] writing updated cluster config ...
	I0923 12:58:22.763224  682373 out.go:201] 
	I0923 12:58:22.764656  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:22.764766  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:58:22.766263  682373 out.go:177] * Starting "ha-097312-m03" control-plane node in "ha-097312" cluster
	I0923 12:58:22.767263  682373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:58:22.767288  682373 cache.go:56] Caching tarball of preloaded images
	I0923 12:58:22.767425  682373 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 12:58:22.767438  682373 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 12:58:22.767549  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:58:22.767768  682373 start.go:360] acquireMachinesLock for ha-097312-m03: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:58:22.767826  682373 start.go:364] duration metric: took 34.115µs to acquireMachinesLock for "ha-097312-m03"
	I0923 12:58:22.767850  682373 start.go:93] Provisioning new machine with config: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:58:22.767994  682373 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0923 12:58:22.769439  682373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:58:22.769539  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:22.769588  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:22.784952  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0923 12:58:22.785373  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:22.785878  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:22.785904  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:22.786220  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:22.786438  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:22.786607  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:22.786798  682373 start.go:159] libmachine.API.Create for "ha-097312" (driver="kvm2")
	I0923 12:58:22.786843  682373 client.go:168] LocalClient.Create starting
	I0923 12:58:22.786909  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 12:58:22.786967  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:58:22.786989  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:58:22.787065  682373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 12:58:22.787087  682373 main.go:141] libmachine: Decoding PEM data...
	I0923 12:58:22.787098  682373 main.go:141] libmachine: Parsing certificate...
	I0923 12:58:22.787116  682373 main.go:141] libmachine: Running pre-create checks...
	I0923 12:58:22.787123  682373 main.go:141] libmachine: (ha-097312-m03) Calling .PreCreateCheck
	I0923 12:58:22.787356  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetConfigRaw
	I0923 12:58:22.787880  682373 main.go:141] libmachine: Creating machine...
	I0923 12:58:22.787894  682373 main.go:141] libmachine: (ha-097312-m03) Calling .Create
	I0923 12:58:22.788064  682373 main.go:141] libmachine: (ha-097312-m03) Creating KVM machine...
	I0923 12:58:22.789249  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found existing default KVM network
	I0923 12:58:22.789434  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found existing private KVM network mk-ha-097312
	I0923 12:58:22.789576  682373 main.go:141] libmachine: (ha-097312-m03) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03 ...
	I0923 12:58:22.789598  682373 main.go:141] libmachine: (ha-097312-m03) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:58:22.789697  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:22.789573  683157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:58:22.789778  682373 main.go:141] libmachine: (ha-097312-m03) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:58:23.067488  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:23.067344  683157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa...
	I0923 12:58:23.227591  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:23.227420  683157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/ha-097312-m03.rawdisk...
	I0923 12:58:23.227631  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Writing magic tar header
	I0923 12:58:23.227668  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Writing SSH key tar header
	I0923 12:58:23.227688  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:23.227552  683157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03 ...
	I0923 12:58:23.227701  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03 (perms=drwx------)
	I0923 12:58:23.227722  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 12:58:23.227735  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 12:58:23.227750  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 12:58:23.227770  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 12:58:23.227784  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03
	I0923 12:58:23.227800  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 12:58:23.227813  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:58:23.227827  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 12:58:23.227839  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 12:58:23.227850  682373 main.go:141] libmachine: (ha-097312-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 12:58:23.227887  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home/jenkins
	I0923 12:58:23.227917  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Checking permissions on dir: /home
	I0923 12:58:23.227930  682373 main.go:141] libmachine: (ha-097312-m03) Creating domain...
	I0923 12:58:23.227949  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Skipping /home - not owner
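The permission lines above show the driver walking every parent of the machine directory, making sure each directory it owns is traversable and skipping directories owned by someone else (hence "Skipping /home - not owner"). Below is a rough, self-contained Go sketch of that idea only, not minikube's actual implementation; the path in main is a hypothetical stand-in for a machine store directory.

package main

import (
	"fmt"
	"os"
	"os/user"
	"path/filepath"
	"strconv"
	"syscall"
)

// ensureTraversable walks from dir up to the filesystem root and, for every
// directory owned by the current user, makes sure the owner-execute bit is
// set so the path can be traversed; directories owned by someone else are
// skipped, matching the "Skipping /home - not owner" message above.
func ensureTraversable(dir string) error {
	u, err := user.Current()
	if err != nil {
		return err
	}
	uid, _ := strconv.Atoi(u.Uid)

	for d := filepath.Clean(dir); ; d = filepath.Dir(d) {
		info, err := os.Stat(d)
		if err != nil {
			return err
		}
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || int(st.Uid) != uid {
			fmt.Printf("Skipping %s - not owner\n", d)
		} else if info.Mode().Perm()&0o100 == 0 {
			if err := os.Chmod(d, info.Mode().Perm()|0o100); err != nil {
				return err
			}
			fmt.Printf("Set executable bit on %s\n", d)
		}
		if d == filepath.Dir(d) { // reached "/"
			return nil
		}
	}
}

func main() {
	// hypothetical machine store path, stands in for .minikube/machines/<name>
	if err := ensureTraversable(os.ExpandEnv("$HOME/.minikube/machines/example")); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}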
	I0923 12:58:23.228646  682373 main.go:141] libmachine: (ha-097312-m03) define libvirt domain using xml: 
	I0923 12:58:23.228661  682373 main.go:141] libmachine: (ha-097312-m03) <domain type='kvm'>
	I0923 12:58:23.228669  682373 main.go:141] libmachine: (ha-097312-m03)   <name>ha-097312-m03</name>
	I0923 12:58:23.228688  682373 main.go:141] libmachine: (ha-097312-m03)   <memory unit='MiB'>2200</memory>
	I0923 12:58:23.228717  682373 main.go:141] libmachine: (ha-097312-m03)   <vcpu>2</vcpu>
	I0923 12:58:23.228738  682373 main.go:141] libmachine: (ha-097312-m03)   <features>
	I0923 12:58:23.228750  682373 main.go:141] libmachine: (ha-097312-m03)     <acpi/>
	I0923 12:58:23.228767  682373 main.go:141] libmachine: (ha-097312-m03)     <apic/>
	I0923 12:58:23.228781  682373 main.go:141] libmachine: (ha-097312-m03)     <pae/>
	I0923 12:58:23.228788  682373 main.go:141] libmachine: (ha-097312-m03)     
	I0923 12:58:23.228798  682373 main.go:141] libmachine: (ha-097312-m03)   </features>
	I0923 12:58:23.228813  682373 main.go:141] libmachine: (ha-097312-m03)   <cpu mode='host-passthrough'>
	I0923 12:58:23.228824  682373 main.go:141] libmachine: (ha-097312-m03)   
	I0923 12:58:23.228832  682373 main.go:141] libmachine: (ha-097312-m03)   </cpu>
	I0923 12:58:23.228843  682373 main.go:141] libmachine: (ha-097312-m03)   <os>
	I0923 12:58:23.228853  682373 main.go:141] libmachine: (ha-097312-m03)     <type>hvm</type>
	I0923 12:58:23.228866  682373 main.go:141] libmachine: (ha-097312-m03)     <boot dev='cdrom'/>
	I0923 12:58:23.228881  682373 main.go:141] libmachine: (ha-097312-m03)     <boot dev='hd'/>
	I0923 12:58:23.228893  682373 main.go:141] libmachine: (ha-097312-m03)     <bootmenu enable='no'/>
	I0923 12:58:23.228902  682373 main.go:141] libmachine: (ha-097312-m03)   </os>
	I0923 12:58:23.228911  682373 main.go:141] libmachine: (ha-097312-m03)   <devices>
	I0923 12:58:23.228922  682373 main.go:141] libmachine: (ha-097312-m03)     <disk type='file' device='cdrom'>
	I0923 12:58:23.228960  682373 main.go:141] libmachine: (ha-097312-m03)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/boot2docker.iso'/>
	I0923 12:58:23.228987  682373 main.go:141] libmachine: (ha-097312-m03)       <target dev='hdc' bus='scsi'/>
	I0923 12:58:23.228998  682373 main.go:141] libmachine: (ha-097312-m03)       <readonly/>
	I0923 12:58:23.229011  682373 main.go:141] libmachine: (ha-097312-m03)     </disk>
	I0923 12:58:23.229023  682373 main.go:141] libmachine: (ha-097312-m03)     <disk type='file' device='disk'>
	I0923 12:58:23.229035  682373 main.go:141] libmachine: (ha-097312-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 12:58:23.229050  682373 main.go:141] libmachine: (ha-097312-m03)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/ha-097312-m03.rawdisk'/>
	I0923 12:58:23.229060  682373 main.go:141] libmachine: (ha-097312-m03)       <target dev='hda' bus='virtio'/>
	I0923 12:58:23.229070  682373 main.go:141] libmachine: (ha-097312-m03)     </disk>
	I0923 12:58:23.229081  682373 main.go:141] libmachine: (ha-097312-m03)     <interface type='network'>
	I0923 12:58:23.229090  682373 main.go:141] libmachine: (ha-097312-m03)       <source network='mk-ha-097312'/>
	I0923 12:58:23.229114  682373 main.go:141] libmachine: (ha-097312-m03)       <model type='virtio'/>
	I0923 12:58:23.229140  682373 main.go:141] libmachine: (ha-097312-m03)     </interface>
	I0923 12:58:23.229160  682373 main.go:141] libmachine: (ha-097312-m03)     <interface type='network'>
	I0923 12:58:23.229172  682373 main.go:141] libmachine: (ha-097312-m03)       <source network='default'/>
	I0923 12:58:23.229186  682373 main.go:141] libmachine: (ha-097312-m03)       <model type='virtio'/>
	I0923 12:58:23.229197  682373 main.go:141] libmachine: (ha-097312-m03)     </interface>
	I0923 12:58:23.229203  682373 main.go:141] libmachine: (ha-097312-m03)     <serial type='pty'>
	I0923 12:58:23.229214  682373 main.go:141] libmachine: (ha-097312-m03)       <target port='0'/>
	I0923 12:58:23.229223  682373 main.go:141] libmachine: (ha-097312-m03)     </serial>
	I0923 12:58:23.229232  682373 main.go:141] libmachine: (ha-097312-m03)     <console type='pty'>
	I0923 12:58:23.229242  682373 main.go:141] libmachine: (ha-097312-m03)       <target type='serial' port='0'/>
	I0923 12:58:23.229252  682373 main.go:141] libmachine: (ha-097312-m03)     </console>
	I0923 12:58:23.229264  682373 main.go:141] libmachine: (ha-097312-m03)     <rng model='virtio'>
	I0923 12:58:23.229283  682373 main.go:141] libmachine: (ha-097312-m03)       <backend model='random'>/dev/random</backend>
	I0923 12:58:23.229301  682373 main.go:141] libmachine: (ha-097312-m03)     </rng>
	I0923 12:58:23.229309  682373 main.go:141] libmachine: (ha-097312-m03)     
	I0923 12:58:23.229315  682373 main.go:141] libmachine: (ha-097312-m03)     
	I0923 12:58:23.229321  682373 main.go:141] libmachine: (ha-097312-m03)   </devices>
	I0923 12:58:23.229324  682373 main.go:141] libmachine: (ha-097312-m03) </domain>
	I0923 12:58:23.229331  682373 main.go:141] libmachine: (ha-097312-m03) 
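The XML printed above is then handed to libvirt to define and boot the domain. The kvm2 driver does this through the libvirt API; the sketch below shells out to virsh instead, purely to keep the example self-contained, and the XML argument in main is a placeholder for the document in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStart writes the generated domain XML to a temp file, defines the
// domain with virsh and boots it. Purely illustrative: the kvm2 driver talks
// to libvirt through its API rather than shelling out to virsh.
func defineAndStart(name, domainXML string) error {
	f, err := os.CreateTemp("", name+"-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		return err
	}
	f.Close()

	for _, args := range [][]string{
		{"define", f.Name()}, // register the domain with libvirt
		{"start", name},      // boot it
	} {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("virsh %v: %w", args, err)
		}
	}
	return nil
}

func main() {
	// domainXML would be the <domain> document printed in the log above
	if err := defineAndStart("ha-097312-m03", "<domain type='kvm'>...</domain>"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}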
	I0923 12:58:23.236443  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:ba:f1:b5 in network default
	I0923 12:58:23.237006  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:23.237021  682373 main.go:141] libmachine: (ha-097312-m03) Ensuring networks are active...
	I0923 12:58:23.237857  682373 main.go:141] libmachine: (ha-097312-m03) Ensuring network default is active
	I0923 12:58:23.238229  682373 main.go:141] libmachine: (ha-097312-m03) Ensuring network mk-ha-097312 is active
	I0923 12:58:23.238611  682373 main.go:141] libmachine: (ha-097312-m03) Getting domain xml...
	I0923 12:58:23.239268  682373 main.go:141] libmachine: (ha-097312-m03) Creating domain...
	I0923 12:58:24.490717  682373 main.go:141] libmachine: (ha-097312-m03) Waiting to get IP...
	I0923 12:58:24.491571  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:24.492070  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:24.492095  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:24.492045  683157 retry.go:31] will retry after 248.750792ms: waiting for machine to come up
	I0923 12:58:24.742884  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:24.743526  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:24.743556  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:24.743474  683157 retry.go:31] will retry after 255.093938ms: waiting for machine to come up
	I0923 12:58:24.999946  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:25.000409  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:25.000437  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:25.000354  683157 retry.go:31] will retry after 366.076555ms: waiting for machine to come up
	I0923 12:58:25.367854  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:25.368400  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:25.368423  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:25.368345  683157 retry.go:31] will retry after 602.474157ms: waiting for machine to come up
	I0923 12:58:25.972258  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:25.972737  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:25.972759  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:25.972695  683157 retry.go:31] will retry after 694.585684ms: waiting for machine to come up
	I0923 12:58:26.668534  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:26.668902  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:26.668929  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:26.668869  683157 retry.go:31] will retry after 679.770142ms: waiting for machine to come up
	I0923 12:58:27.350837  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:27.351322  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:27.351348  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:27.351244  683157 retry.go:31] will retry after 724.740855ms: waiting for machine to come up
	I0923 12:58:28.077164  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:28.077637  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:28.077666  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:28.077575  683157 retry.go:31] will retry after 928.712628ms: waiting for machine to come up
	I0923 12:58:29.008154  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:29.008550  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:29.008579  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:29.008504  683157 retry.go:31] will retry after 1.450407892s: waiting for machine to come up
	I0923 12:58:30.461271  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:30.461634  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:30.461657  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:30.461609  683157 retry.go:31] will retry after 1.972612983s: waiting for machine to come up
	I0923 12:58:32.435439  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:32.435994  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:32.436026  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:32.435936  683157 retry.go:31] will retry after 2.428412852s: waiting for machine to come up
	I0923 12:58:34.866973  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:34.867442  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:34.867469  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:34.867396  683157 retry.go:31] will retry after 3.321760424s: waiting for machine to come up
	I0923 12:58:38.190761  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:38.191232  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:38.191259  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:38.191169  683157 retry.go:31] will retry after 3.240294118s: waiting for machine to come up
	I0923 12:58:41.435372  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:41.435812  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find current IP address of domain ha-097312-m03 in network mk-ha-097312
	I0923 12:58:41.435833  682373 main.go:141] libmachine: (ha-097312-m03) DBG | I0923 12:58:41.435772  683157 retry.go:31] will retry after 4.450333931s: waiting for machine to come up
	I0923 12:58:45.888567  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:45.889089  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has current primary IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:45.889129  682373 main.go:141] libmachine: (ha-097312-m03) Found IP for machine: 192.168.39.174
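Getting the IP boils down to polling the network's DHCP leases for the domain's MAC address, backing off a little more on each retry, as the retry.go lines above show. A hedged sketch of that loop using virsh net-dhcp-leases; the field index assumes virsh's default table layout, and the real driver reads leases via the libvirt API rather than parsing virsh output.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLease polls `virsh net-dhcp-leases <network>` until a lease for the
// given MAC shows up, backing off between attempts much like the retry.go
// lines above. Assumes virsh's default table layout, where the fifth field of
// a lease row is "IP/prefix".
func waitForLease(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if !strings.Contains(line, mac) {
					continue
				}
				fields := strings.Fields(line)
				if len(fields) >= 5 {
					return strings.Split(fields[4], "/")[0], nil
				}
			}
		}
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay *= 2 // rough exponential backoff, capped
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s after %s", mac, network, timeout)
}

func main() {
	ip, err := waitForLease("mk-ha-097312", "52:54:00:39:fc:65", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found IP:", ip)
}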
	I0923 12:58:45.889152  682373 main.go:141] libmachine: (ha-097312-m03) Reserving static IP address...
	I0923 12:58:45.889591  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find host DHCP lease matching {name: "ha-097312-m03", mac: "52:54:00:39:fc:65", ip: "192.168.39.174"} in network mk-ha-097312
	I0923 12:58:45.977147  682373 main.go:141] libmachine: (ha-097312-m03) Reserved static IP address: 192.168.39.174
	I0923 12:58:45.977177  682373 main.go:141] libmachine: (ha-097312-m03) Waiting for SSH to be available...
	I0923 12:58:45.977199  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Getting to WaitForSSH function...
	I0923 12:58:45.980053  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:45.980585  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312
	I0923 12:58:45.980626  682373 main.go:141] libmachine: (ha-097312-m03) DBG | unable to find defined IP address of network mk-ha-097312 interface with MAC address 52:54:00:39:fc:65
	I0923 12:58:45.980767  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH client type: external
	I0923 12:58:45.980803  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa (-rw-------)
	I0923 12:58:45.980837  682373 main.go:141] libmachine: (ha-097312-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:58:45.980856  682373 main.go:141] libmachine: (ha-097312-m03) DBG | About to run SSH command:
	I0923 12:58:45.980901  682373 main.go:141] libmachine: (ha-097312-m03) DBG | exit 0
	I0923 12:58:45.984924  682373 main.go:141] libmachine: (ha-097312-m03) DBG | SSH cmd err, output: exit status 255: 
	I0923 12:58:45.984953  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0923 12:58:45.984969  682373 main.go:141] libmachine: (ha-097312-m03) DBG | command : exit 0
	I0923 12:58:45.984980  682373 main.go:141] libmachine: (ha-097312-m03) DBG | err     : exit status 255
	I0923 12:58:45.984992  682373 main.go:141] libmachine: (ha-097312-m03) DBG | output  : 
	I0923 12:58:48.985305  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Getting to WaitForSSH function...
	I0923 12:58:48.988493  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:48.989086  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:48.989132  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:48.989359  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH client type: external
	I0923 12:58:48.989374  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa (-rw-------)
	I0923 12:58:48.989402  682373 main.go:141] libmachine: (ha-097312-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:58:48.989422  682373 main.go:141] libmachine: (ha-097312-m03) DBG | About to run SSH command:
	I0923 12:58:48.989477  682373 main.go:141] libmachine: (ha-097312-m03) DBG | exit 0
	I0923 12:58:49.118512  682373 main.go:141] libmachine: (ha-097312-m03) DBG | SSH cmd err, output: <nil>: 
	I0923 12:58:49.118822  682373 main.go:141] libmachine: (ha-097312-m03) KVM machine creation complete!
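WaitForSSH simply retries `ssh ... "exit 0"` with the options shown above until the command succeeds (the first attempt fails with exit status 255 because the lease was not yet assigned). A minimal Go equivalent, with the address, key path and rough 3s retry interval taken from the log as example values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running `ssh ... "exit 0"` until it succeeds or the
// deadline passes. The options mirror the external SSH invocation in the log.
func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd is up and the key was accepted
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %s", addr, timeout)
}

func main() {
	key := "/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa"
	if err := waitForSSH("192.168.39.174", key, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}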
	I0923 12:58:49.119172  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetConfigRaw
	I0923 12:58:49.119782  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:49.119996  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:49.120225  682373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 12:58:49.120260  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetState
	I0923 12:58:49.121499  682373 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 12:58:49.121514  682373 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 12:58:49.121519  682373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 12:58:49.121524  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.124296  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.124870  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.124900  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.125084  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.125266  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.125423  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.125561  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.125760  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.126112  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.126128  682373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 12:58:49.237975  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:58:49.238009  682373 main.go:141] libmachine: Detecting the provisioner...
	I0923 12:58:49.238020  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.241019  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.241453  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.241483  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.241651  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.241948  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.242157  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.242344  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.242559  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.242800  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.242816  682373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 12:58:49.358902  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 12:58:49.358998  682373 main.go:141] libmachine: found compatible host: buildroot
	I0923 12:58:49.359008  682373 main.go:141] libmachine: Provisioning with buildroot...
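The provisioner is picked by reading /etc/os-release over SSH and matching its key=value fields, which is how "buildroot" is found above. A simplified stand-in for that detection, fed the exact output from the log:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner pulls the ID field out of /etc/os-release-style output,
// which is how the log arrives at "buildroot". A simplified stand-in for
// libmachine's provisioner detection.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if k, v, ok := strings.Cut(strings.TrimSpace(sc.Text()), "="); ok && k == "ID" {
			return strings.ToLower(strings.Trim(v, `"`))
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println(detectProvisioner(out)) // buildroot
}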
	I0923 12:58:49.359016  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:49.359321  682373 buildroot.go:166] provisioning hostname "ha-097312-m03"
	I0923 12:58:49.359351  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:49.359578  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.362575  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.363012  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.363043  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.363307  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.363499  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.363671  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.363837  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.363993  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.364183  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.364200  682373 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-097312-m03 && echo "ha-097312-m03" | sudo tee /etc/hostname
	I0923 12:58:49.489492  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312-m03
	
	I0923 12:58:49.489526  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.492826  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.493233  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.493269  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.493628  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.493912  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.494119  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.494303  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.494519  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:49.494751  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:49.494771  682373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-097312-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-097312-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-097312-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:58:49.623370  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:58:49.623402  682373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 12:58:49.623425  682373 buildroot.go:174] setting up certificates
	I0923 12:58:49.623436  682373 provision.go:84] configureAuth start
	I0923 12:58:49.623450  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetMachineName
	I0923 12:58:49.623804  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:49.626789  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.627251  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.627282  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.627473  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.630844  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.631265  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.631296  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.631526  682373 provision.go:143] copyHostCerts
	I0923 12:58:49.631561  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:58:49.631598  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 12:58:49.631607  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 12:58:49.631691  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 12:58:49.631792  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:58:49.631821  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 12:58:49.631827  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 12:58:49.631868  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 12:58:49.631937  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:58:49.631962  682373 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 12:58:49.631969  682373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 12:58:49.632010  682373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 12:58:49.632096  682373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.ha-097312-m03 san=[127.0.0.1 192.168.39.174 ha-097312-m03 localhost minikube]
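The server certificate is issued from the existing minikube CA with the SANs listed above (loopback, the node IP, the hostname, localhost, minikube). Below is a trimmed-down outline of that step using crypto/x509, assuming an RSA CA key in PKCS#1 PEM form; the file names in main are stand-ins for the ca.pem/ca-key.pem and server.pem/server-key.pem paths in the log, and this is not minikube's provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"errors"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServerCert signs a fresh server key with an existing CA, adding the IP
// and DNS SANs passed in. Error handling is trimmed to the essentials.
func issueServerCert(caPEM, caKeyPEM []byte, ips []net.IP, names []string) (certPEM, keyPEM []byte, err error) {
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		return nil, nil, errors.New("could not decode CA cert or key PEM")
	}
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		return nil, nil, err
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-097312-m03"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     names,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	caPEM, _ := os.ReadFile("ca.pem")        // stands in for .minikube/certs/ca.pem
	caKeyPEM, _ := os.ReadFile("ca-key.pem") // stands in for .minikube/certs/ca-key.pem
	cert, key, err := issueServerCert(caPEM, caKeyPEM,
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.174")},
		[]string{"ha-097312-m03", "localhost", "minikube"})
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", cert, 0o644)
	_ = os.WriteFile("server-key.pem", key, 0o600)
}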
	I0923 12:58:49.828110  682373 provision.go:177] copyRemoteCerts
	I0923 12:58:49.828198  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:58:49.828227  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:49.830911  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.831302  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:49.831336  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:49.831594  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:49.831831  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:49.832077  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:49.832238  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:49.921694  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:58:49.921777  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:58:49.946275  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:58:49.946377  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:58:49.972209  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:58:49.972329  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 12:58:49.998142  682373 provision.go:87] duration metric: took 374.691465ms to configureAuth
	I0923 12:58:49.998176  682373 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:58:49.998394  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:49.998468  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.001457  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.001907  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.002003  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.002101  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.002332  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.002519  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.002830  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.003058  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:50.003274  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:50.003290  682373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 12:58:50.239197  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 12:58:50.239229  682373 main.go:141] libmachine: Checking connection to Docker...
	I0923 12:58:50.239238  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetURL
	I0923 12:58:50.240570  682373 main.go:141] libmachine: (ha-097312-m03) DBG | Using libvirt version 6000000
	I0923 12:58:50.243373  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.243723  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.243750  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.243998  682373 main.go:141] libmachine: Docker is up and running!
	I0923 12:58:50.244012  682373 main.go:141] libmachine: Reticulating splines...
	I0923 12:58:50.244021  682373 client.go:171] duration metric: took 27.457166675s to LocalClient.Create
	I0923 12:58:50.244048  682373 start.go:167] duration metric: took 27.457253634s to libmachine.API.Create "ha-097312"
	I0923 12:58:50.244058  682373 start.go:293] postStartSetup for "ha-097312-m03" (driver="kvm2")
	I0923 12:58:50.244067  682373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:58:50.244084  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.244341  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:58:50.244373  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.247177  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.247500  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.247521  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.247754  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.247951  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.248097  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.248197  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:50.333384  682373 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:58:50.338046  682373 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:58:50.338080  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 12:58:50.338170  682373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 12:58:50.338267  682373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 12:58:50.338282  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 12:58:50.338392  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:58:50.348354  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:58:50.372707  682373 start.go:296] duration metric: took 128.633991ms for postStartSetup
	I0923 12:58:50.372762  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetConfigRaw
	I0923 12:58:50.373426  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:50.376697  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.377173  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.377211  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.377593  682373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 12:58:50.377873  682373 start.go:128] duration metric: took 27.609858816s to createHost
	I0923 12:58:50.377907  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.380411  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.380907  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.380940  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.381160  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.381382  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.381590  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.381776  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.381976  682373 main.go:141] libmachine: Using SSH client type: native
	I0923 12:58:50.382153  682373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0923 12:58:50.382163  682373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:58:50.503140  682373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727096330.482204055
	
	I0923 12:58:50.503171  682373 fix.go:216] guest clock: 1727096330.482204055
	I0923 12:58:50.503182  682373 fix.go:229] Guest: 2024-09-23 12:58:50.482204055 +0000 UTC Remote: 2024-09-23 12:58:50.377890431 +0000 UTC m=+148.586385508 (delta=104.313624ms)
	I0923 12:58:50.503201  682373 fix.go:200] guest clock delta is within tolerance: 104.313624ms
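The guest clock check runs `date +%s.%N` in the VM, parses the result, and compares it with the host clock; the ~104ms delta above is then judged against a tolerance. A small sketch of that parsing and comparison, using an arbitrary 2s tolerance purely for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta turns the `date +%s.%N` output from the guest into a time.Time
// and compares it with the local clock, the same kind of check the log
// reports as "guest clock delta is within tolerance".
func clockDelta(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(guestOutput), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return 0, false, err
	}
	var ns int64
	if frac != "" {
		frac = (frac + "000000000")[:9] // pad/trim the fraction to nanoseconds
		ns, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return 0, false, err
		}
	}
	guest := time.Unix(s, ns)
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	d, ok, _ := clockDelta("1727096330.482204055", 2*time.Second)
	fmt.Println(d, ok)
}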
	I0923 12:58:50.503207  682373 start.go:83] releasing machines lock for "ha-097312-m03", held for 27.735369252s
	I0923 12:58:50.503226  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.503498  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:50.506212  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.506688  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.506716  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.509222  682373 out.go:177] * Found network options:
	I0923 12:58:50.511101  682373 out.go:177]   - NO_PROXY=192.168.39.160,192.168.39.214
	W0923 12:58:50.512787  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:58:50.512820  682373 proxy.go:119] fail to check proxy env: Error ip not in block
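The proxy warnings above mean the new node's IP is not covered by the NO_PROXY entries (which here are single addresses, not CIDR blocks). An illustrative check of that condition, not minikube's actual proxy.go logic:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipCoveredByNoProxy reports whether ip matches any entry in a NO_PROXY-style
// list, treating entries either as exact IPs or as CIDR blocks.
func ipCoveredByNoProxy(ip string, noProxy string) bool {
	target := net.ParseIP(ip)
	if target == nil {
		return false
	}
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if _, block, err := net.ParseCIDR(entry); err == nil {
			if block.Contains(target) {
				return true
			}
			continue
		}
		if other := net.ParseIP(entry); other != nil && other.Equal(target) {
			return true
		}
	}
	return false
}

func main() {
	// the node IP from the log is not in the NO_PROXY list, so this prints false
	fmt.Println(ipCoveredByNoProxy("192.168.39.174", "192.168.39.160,192.168.39.214"))
}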
	I0923 12:58:50.512843  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.513731  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.513996  682373 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 12:58:50.514102  682373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:58:50.514157  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	W0923 12:58:50.514279  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:58:50.514318  682373 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:58:50.514393  682373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 12:58:50.514415  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 12:58:50.517470  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.517502  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.517875  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.517907  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.517943  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:50.517962  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:50.518097  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.518178  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 12:58:50.518290  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.518373  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 12:58:50.518440  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.518566  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:50.518640  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 12:58:50.518802  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 12:58:50.765065  682373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:58:50.770910  682373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:58:50.770996  682373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:58:50.788872  682373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:58:50.788920  682373 start.go:495] detecting cgroup driver to use...
	I0923 12:58:50.790888  682373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:58:50.809431  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:58:50.825038  682373 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:58:50.825112  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:58:50.839523  682373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:58:50.854328  682373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:58:50.973330  682373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:58:51.114738  682373 docker.go:233] disabling docker service ...
	I0923 12:58:51.114816  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:58:51.129713  682373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:58:51.142863  682373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:58:51.295068  682373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:58:51.429699  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:58:51.445916  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:58:51.465380  682373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 12:58:51.465444  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.476939  682373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 12:58:51.477023  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.489669  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.501133  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.512757  682373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:58:51.524127  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.535054  682373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.553239  682373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 12:58:51.565038  682373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:58:51.575598  682373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:58:51.575670  682373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:58:51.590718  682373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:58:51.601615  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:58:51.733836  682373 ssh_runner.go:195] Run: sudo systemctl restart crio
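After the sed edits above, the drop-in at /etc/crio/crio.conf.d/02-crio.conf should pin the pause image, the cgroupfs cgroup manager, and the conmon cgroup before crio is restarted. A small sanity check one could run on the node, assuming those keys were present in the drop-in to begin with; it is an illustration, not part of minikube:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// verifyCrioDropIn confirms that the CRI-O drop-in contains the values the
// sed commands in the log are meant to set.
func verifyCrioDropIn(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for _, want := range []string{
		`(?m)^pause_image = "registry\.k8s\.io/pause:3\.10"$`,
		`(?m)^cgroup_manager = "cgroupfs"$`,
		`(?m)^conmon_cgroup = "pod"$`,
	} {
		if !regexp.MustCompile(want).MatchString(string(data)) {
			return fmt.Errorf("%s: expected to find %s", path, want)
		}
	}
	return nil
}

func main() {
	if err := verifyCrioDropIn("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI-O drop-in looks configured for minikube")
}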
	I0923 12:58:51.836194  682373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 12:58:51.836276  682373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 12:58:51.841212  682373 start.go:563] Will wait 60s for crictl version
	I0923 12:58:51.841301  682373 ssh_runner.go:195] Run: which crictl
	I0923 12:58:51.845296  682373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:58:51.885994  682373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
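"Will wait 60s for crictl version" above is just a poll of `crictl version` until the runtime answers. A sketch of that wait, shelling out through sudo the same way the log does:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForCrictl keeps running `sudo <crictl> version` until it succeeds or
// the context times out, roughly matching the 60s wait in the log.
func waitForCrictl(ctx context.Context, crictlPath string) (string, error) {
	for {
		out, err := exec.CommandContext(ctx, "sudo", crictlPath, "version").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("crictl never became ready: %w (last output: %s)", ctx.Err(), out)
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	out, err := waitForCrictl(ctx, "/usr/bin/crictl")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}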
	I0923 12:58:51.886074  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:58:51.916461  682373 ssh_runner.go:195] Run: crio --version
	I0923 12:58:51.949216  682373 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 12:58:51.950816  682373 out.go:177]   - env NO_PROXY=192.168.39.160
	I0923 12:58:51.952396  682373 out.go:177]   - env NO_PROXY=192.168.39.160,192.168.39.214
	I0923 12:58:51.953858  682373 main.go:141] libmachine: (ha-097312-m03) Calling .GetIP
	I0923 12:58:51.957017  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:51.957485  682373 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 12:58:51.957528  682373 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 12:58:51.957807  682373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:58:51.962319  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
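The bash one-liner above rewrites /etc/hosts so that exactly one line maps host.minikube.internal to the gateway IP. The same idea expressed in Go, writing to a sibling file instead of the temp-file-plus-sudo-cp dance from the log; only a sketch of the technique:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing tab-separated entry for name and
// appends "ip<TAB>name", mirroring the grep/echo pipeline in the log. The
// result is written next to the original file rather than copied over it.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any existing entry for name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path+".new", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}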
	I0923 12:58:51.975129  682373 mustload.go:65] Loading cluster: ha-097312
	I0923 12:58:51.975422  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:58:51.975727  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:51.975781  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:51.992675  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0923 12:58:51.993145  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:51.993728  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:51.993763  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:51.994191  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:51.994434  682373 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 12:58:51.996127  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:58:51.996593  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:51.996642  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:52.013141  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39117
	I0923 12:58:52.013710  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:52.014272  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:52.014297  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:52.014717  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:52.014958  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:58:52.015174  682373 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312 for IP: 192.168.39.174
	I0923 12:58:52.015189  682373 certs.go:194] generating shared ca certs ...
	I0923 12:58:52.015209  682373 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:58:52.015353  682373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 12:58:52.015390  682373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 12:58:52.015406  682373 certs.go:256] generating profile certs ...
	I0923 12:58:52.015485  682373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key
	I0923 12:58:52.015512  682373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec
	I0923 12:58:52.015531  682373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160 192.168.39.214 192.168.39.174 192.168.39.254]
	I0923 12:58:52.141850  682373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec ...
	I0923 12:58:52.141895  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec: {Name:mkad80d48481e741ac2c369b88d81a886d1377dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:58:52.142113  682373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec ...
	I0923 12:58:52.142128  682373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec: {Name:mkc4802b23ce391f6bffaeddf1263168cc10992d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:58:52.142267  682373 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.480c46ec -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	I0923 12:58:52.142420  682373 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.480c46ec -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key
	I0923 12:58:52.142572  682373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key
	I0923 12:58:52.142590  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:58:52.142609  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:58:52.142626  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:58:52.142641  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:58:52.142657  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:58:52.142672  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:58:52.142686  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:58:52.162055  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:58:52.162175  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 12:58:52.162222  682373 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 12:58:52.162262  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:58:52.162301  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 12:58:52.162335  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:58:52.162366  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 12:58:52.162425  682373 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 12:58:52.162463  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.162486  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.162507  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.162554  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:58:52.165353  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:52.165846  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:58:52.165879  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:52.166095  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:58:52.166330  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:58:52.166495  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:58:52.166657  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:58:52.246349  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 12:58:52.251941  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 12:58:52.264760  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 12:58:52.269374  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0923 12:58:52.280997  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 12:58:52.286014  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 12:58:52.298212  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 12:58:52.302755  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 12:58:52.314763  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 12:58:52.319431  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 12:58:52.330709  682373 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 12:58:52.335071  682373 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1671 bytes)
	I0923 12:58:52.347748  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:58:52.374394  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:58:52.402200  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:58:52.428792  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:58:52.453080  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0923 12:58:52.477297  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:58:52.502367  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:58:52.527508  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:58:52.552924  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:58:52.577615  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 12:58:52.602992  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 12:58:52.628751  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 12:58:52.648794  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0923 12:58:52.665863  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 12:58:52.683590  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 12:58:52.703077  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 12:58:52.721135  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1671 bytes)
	I0923 12:58:52.738608  682373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 12:58:52.756580  682373 ssh_runner.go:195] Run: openssl version
	I0923 12:58:52.762277  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:58:52.773072  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.778133  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.778215  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:58:52.784053  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:58:52.795445  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 12:58:52.806223  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.811080  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.811155  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 12:58:52.817004  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 12:58:52.828392  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 12:58:52.839455  682373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.844434  682373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.844501  682373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 12:58:52.850419  682373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
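The three certificate blocks above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients on the node trust it. A minimal standalone sketch of that pattern, using the minikubeCA.pem path from the log (illustrative only, not the exact command sequence minikube runs):

	pem=/usr/share/ca-certificates/minikubeCA.pem
	# subject hash OpenSSL uses to locate the CA at verification time
	hash=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 for this CA, matching the symlink above
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"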
	I0923 12:58:52.861972  682373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:58:52.866305  682373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:58:52.866361  682373 kubeadm.go:934] updating node {m03 192.168.39.174 8443 v1.31.1 crio true true} ...
	I0923 12:58:52.866458  682373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-097312-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:58:52.866484  682373 kube-vip.go:115] generating kube-vip config ...
	I0923 12:58:52.866520  682373 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:58:52.883666  682373 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:58:52.883745  682373 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
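The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down, so the kubelet runs kube-vip as a static pod and the elected leader claims the 192.168.39.254 VIP on eth0. A quick manual check on a control-plane VM (assumed commands for illustration, not part of the test run):

	sudo crictl ps --name kube-vip            # static pod container started from the manifest above
	ip addr show eth0 | grep 192.168.39.254   # the VIP is bound only on the current leader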
	I0923 12:58:52.883809  682373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:58:52.895283  682373 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 12:58:52.895366  682373 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 12:58:52.905663  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 12:58:52.905685  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 12:58:52.905697  682373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 12:58:52.905721  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:58:52.905750  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:58:52.905775  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:58:52.905694  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:58:52.905887  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:58:52.923501  682373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:58:52.923608  682373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:58:52.923612  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 12:58:52.923649  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 12:58:52.923698  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 12:58:52.923733  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 12:58:52.956744  682373 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 12:58:52.956812  682373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
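When the binaries are missing from /var/lib/minikube/binaries/v1.31.1, the log above points at the dl.k8s.io release URLs and their published .sha256 checksums before copying the cached binaries onto the node. A minimal sketch of an equivalent manual download-and-verify (URLs and destination path taken from the log; the loop itself is illustrative, not what minikube runs):

	ver=v1.31.1
	for bin in kubeadm kubelet kubectl; do
	  curl -fsSLo "$bin" "https://dl.k8s.io/release/${ver}/bin/linux/amd64/${bin}"
	  # the .sha256 file holds only the digest, so build the "digest  filename" line sha256sum expects
	  echo "$(curl -fsSL "https://dl.k8s.io/release/${ver}/bin/linux/amd64/${bin}.sha256")  ${bin}" | sha256sum -c -
	  sudo install -m 0755 "$bin" "/var/lib/minikube/binaries/${ver}/${bin}"
	done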
	I0923 12:58:54.045786  682373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 12:58:54.057369  682373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 12:58:54.076949  682373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:58:54.094827  682373 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 12:58:54.111645  682373 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:58:54.115795  682373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:58:54.129074  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:58:54.273605  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:58:54.295098  682373 host.go:66] Checking if "ha-097312" exists ...
	I0923 12:58:54.295704  682373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:58:54.295775  682373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:58:54.312297  682373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0923 12:58:54.312791  682373 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:58:54.313333  682373 main.go:141] libmachine: Using API Version  1
	I0923 12:58:54.313355  682373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:58:54.313727  682373 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:58:54.314023  682373 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 12:58:54.314202  682373 start.go:317] joinCluster: &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:58:54.314373  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 12:58:54.314400  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 12:58:54.318048  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:54.318537  682373 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 12:58:54.318569  682373 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 12:58:54.318697  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 12:58:54.319009  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 12:58:54.319229  682373 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 12:58:54.319353  682373 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 12:58:54.524084  682373 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:58:54.524132  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ll3mfm.tdumzjzob0cezji3 --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m03 --control-plane --apiserver-advertise-address=192.168.39.174 --apiserver-bind-port=8443"
	I0923 12:59:17.735394  682373 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ll3mfm.tdumzjzob0cezji3 --discovery-token-ca-cert-hash sha256:3fc29dc81bde6bbaef9ddbc91342eaa216189e2d814cc53e215aada75bebb1ff --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-097312-m03 --control-plane --apiserver-advertise-address=192.168.39.174 --apiserver-bind-port=8443": (23.211225253s)
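With the kubeadm join above completed in about 23s, the rest of this section is minikube polling GET /api/v1/nodes/ha-097312-m03 until the node reports Ready. The same check can be made by hand from the first control plane (node name and the 6m timeout taken from the log; an illustrative equivalent, not minikube's own code path):

	kubectl --kubeconfig /etc/kubernetes/admin.conf get node ha-097312-m03 -o wide
	kubectl --kubeconfig /etc/kubernetes/admin.conf wait --for=condition=Ready node/ha-097312-m03 --timeout=6m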
	I0923 12:59:17.735437  682373 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 12:59:18.305608  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-097312-m03 minikube.k8s.io/updated_at=2024_09_23T12_59_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-097312 minikube.k8s.io/primary=false
	I0923 12:59:18.439539  682373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-097312-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 12:59:18.578555  682373 start.go:319] duration metric: took 24.264347271s to joinCluster
	I0923 12:59:18.578645  682373 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 12:59:18.578956  682373 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:59:18.580466  682373 out.go:177] * Verifying Kubernetes components...
	I0923 12:59:18.581761  682373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:59:18.828388  682373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:59:18.856001  682373 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:59:18.856284  682373 kapi.go:59] client config for ha-097312: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key", CAFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 12:59:18.856351  682373 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.160:8443
	I0923 12:59:18.856639  682373 node_ready.go:35] waiting up to 6m0s for node "ha-097312-m03" to be "Ready" ...
	I0923 12:59:18.856738  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:18.856749  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:18.856757  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:18.856766  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:18.860204  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:19.357957  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:19.357992  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:19.358007  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:19.358015  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:19.361736  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:19.857898  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:19.857930  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:19.857938  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:19.857944  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:19.862012  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:20.356893  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:20.356921  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:20.356930  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:20.356934  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:20.363054  682373 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:59:20.857559  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:20.857592  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:20.857605  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:20.857610  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:20.861005  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:20.862362  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:21.357690  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:21.357715  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:21.357724  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:21.357728  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:21.361111  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:21.857622  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:21.857650  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:21.857662  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:21.857666  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:21.861308  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:22.357805  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:22.357838  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:22.357852  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:22.357857  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:22.362010  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:22.856839  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:22.856862  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:22.856870  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:22.856876  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:22.860508  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:23.356920  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:23.356945  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:23.356954  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:23.356958  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:23.361117  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:23.361903  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:23.857041  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:23.857068  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:23.857080  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:23.857085  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:23.860533  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:24.357315  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:24.357339  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:24.357347  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:24.357351  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:24.361517  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:24.857855  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:24.857884  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:24.857895  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:24.857900  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:24.861499  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:25.357580  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:25.357619  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:25.357634  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:25.357642  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:25.361466  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:25.362062  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:25.856889  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:25.856972  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:25.856988  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:25.856995  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:25.864725  682373 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:59:26.357753  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:26.357775  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:26.357783  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:26.357788  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:26.361700  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:26.857569  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:26.857596  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:26.857606  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:26.857610  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:26.861224  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:27.357961  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:27.357993  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:27.358004  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:27.358010  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:27.361578  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:27.362220  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:27.857445  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:27.857476  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:27.857488  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:27.857492  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:27.860961  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:28.356947  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:28.356973  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:28.356982  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:28.356986  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:28.360616  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:28.857670  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:28.857696  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:28.857705  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:28.857709  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:28.861424  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:29.357678  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:29.357701  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:29.357710  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:29.357715  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:29.361197  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:29.857149  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:29.857176  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:29.857184  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:29.857190  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:29.861121  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:29.862064  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:30.357260  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:30.357288  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:30.357300  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:30.357308  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:30.360825  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:30.857554  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:30.857588  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:30.857601  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:30.857607  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:30.862056  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:31.357693  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:31.357719  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:31.357729  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:31.357745  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:31.361364  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:31.857735  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:31.857763  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:31.857772  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:31.857777  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:31.861563  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:31.862191  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:32.357163  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:32.357191  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:32.357201  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:32.357207  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:32.360747  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:32.857730  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:32.857757  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:32.857766  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:32.857770  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:32.861363  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:33.357472  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:33.357507  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:33.357516  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:33.357521  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:33.361140  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:33.857033  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:33.857060  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:33.857069  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:33.857073  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:33.860438  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:34.357801  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:34.357841  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:34.357852  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:34.357857  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:34.361712  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:34.362366  682373 node_ready.go:53] node "ha-097312-m03" has status "Ready":"False"
	I0923 12:59:34.857887  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:34.857914  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:34.857924  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:34.857929  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:34.861889  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:35.357641  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:35.357673  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:35.357745  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:35.357754  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:35.362328  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:35.856847  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:35.856871  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:35.856879  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:35.856884  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:35.860452  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.357570  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:36.357596  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.357604  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.357608  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.360898  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.361411  682373 node_ready.go:49] node "ha-097312-m03" has status "Ready":"True"
	I0923 12:59:36.361434  682373 node_ready.go:38] duration metric: took 17.504775714s for node "ha-097312-m03" to be "Ready" ...
	I0923 12:59:36.361446  682373 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:59:36.361531  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:36.361549  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.361557  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.361564  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.367567  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:36.374612  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.374726  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6g9x2
	I0923 12:59:36.374738  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.374750  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.374756  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.377869  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.378692  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:36.378712  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.378724  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.378729  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.381742  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.382472  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.382491  682373 pod_ready.go:82] duration metric: took 7.850172ms for pod "coredns-7c65d6cfc9-6g9x2" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.382500  682373 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.382562  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-txcxz
	I0923 12:59:36.382569  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.382577  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.382582  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.385403  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.386115  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:36.386131  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.386138  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.386142  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.388676  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.389107  682373 pod_ready.go:93] pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.389124  682373 pod_ready.go:82] duration metric: took 6.617983ms for pod "coredns-7c65d6cfc9-txcxz" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.389133  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.389188  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312
	I0923 12:59:36.389195  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.389202  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.389208  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.391701  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.392175  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:36.392190  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.392198  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.392201  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.394837  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.395206  682373 pod_ready.go:93] pod "etcd-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.395227  682373 pod_ready.go:82] duration metric: took 6.08706ms for pod "etcd-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.395247  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.395320  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m02
	I0923 12:59:36.395330  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.395337  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.395340  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.398083  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.398586  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:36.398601  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.398608  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.398611  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.401154  682373 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:59:36.401531  682373 pod_ready.go:93] pod "etcd-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.401548  682373 pod_ready.go:82] duration metric: took 6.293178ms for pod "etcd-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.401558  682373 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.557912  682373 request.go:632] Waited for 156.279648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m03
	I0923 12:59:36.558018  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/etcd-ha-097312-m03
	I0923 12:59:36.558029  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.558039  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.558047  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.561558  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.757644  682373 request.go:632] Waited for 194.999965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:36.757715  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:36.757723  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.757735  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.757740  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.761054  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:36.761940  682373 pod_ready.go:93] pod "etcd-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:36.761961  682373 pod_ready.go:82] duration metric: took 360.394832ms for pod "etcd-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.761980  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:36.958288  682373 request.go:632] Waited for 196.158494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:59:36.958372  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312
	I0923 12:59:36.958380  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:36.958392  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:36.958398  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:36.962196  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:37.157878  682373 request.go:632] Waited for 194.88858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:37.157969  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:37.157982  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.157994  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.158002  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.161325  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:37.162218  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:37.162262  682373 pod_ready.go:82] duration metric: took 400.255775ms for pod "kube-apiserver-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.162271  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.358381  682373 request.go:632] Waited for 196.017645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:59:37.358481  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m02
	I0923 12:59:37.358490  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.358512  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.358538  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.362068  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:37.558164  682373 request.go:632] Waited for 195.3848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:37.558235  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:37.558245  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.558256  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.558264  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.563780  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:37.564272  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:37.564295  682373 pod_ready.go:82] duration metric: took 402.016943ms for pod "kube-apiserver-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.564305  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.757786  682373 request.go:632] Waited for 193.39104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m03
	I0923 12:59:37.757874  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-097312-m03
	I0923 12:59:37.757881  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.757890  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.757897  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.762281  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:37.958642  682373 request.go:632] Waited for 195.351711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:37.958724  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:37.958731  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:37.958741  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:37.958751  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:37.963464  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:37.964071  682373 pod_ready.go:93] pod "kube-apiserver-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:37.964093  682373 pod_ready.go:82] duration metric: took 399.781684ms for pod "kube-apiserver-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:37.964104  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.158303  682373 request.go:632] Waited for 194.104315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:59:38.158371  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312
	I0923 12:59:38.158377  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.158385  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.158391  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.161516  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:38.358608  682373 request.go:632] Waited for 196.37901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:38.358678  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:38.358683  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.358693  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.358707  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.362309  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:38.362758  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:38.362779  682373 pod_ready.go:82] duration metric: took 398.667788ms for pod "kube-controller-manager-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.362790  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.557916  682373 request.go:632] Waited for 195.037752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:59:38.558039  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m02
	I0923 12:59:38.558049  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.558057  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.558064  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.563352  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:38.758557  682373 request.go:632] Waited for 194.402691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:38.758625  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:38.758630  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.758637  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.758647  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.763501  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:38.764092  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:38.764116  682373 pod_ready.go:82] duration metric: took 401.316143ms for pod "kube-controller-manager-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.764127  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:38.958205  682373 request.go:632] Waited for 193.95149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m03
	I0923 12:59:38.958318  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-097312-m03
	I0923 12:59:38.958330  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:38.958341  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:38.958349  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:38.962605  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:39.158615  682373 request.go:632] Waited for 195.29247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.158699  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.158709  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.158718  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.158721  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.162027  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.162535  682373 pod_ready.go:93] pod "kube-controller-manager-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:39.162561  682373 pod_ready.go:82] duration metric: took 398.425721ms for pod "kube-controller-manager-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.162572  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.358164  682373 request.go:632] Waited for 195.510394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:59:39.358250  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-drj8m
	I0923 12:59:39.358257  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.358268  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.358277  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.361850  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.558199  682373 request.go:632] Waited for 195.364547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:39.558282  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:39.558297  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.558307  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.558313  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.561590  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.562130  682373 pod_ready.go:93] pod "kube-proxy-drj8m" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:39.562153  682373 pod_ready.go:82] duration metric: took 399.573676ms for pod "kube-proxy-drj8m" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.562166  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vs524" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.758184  682373 request.go:632] Waited for 195.937914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vs524
	I0923 12:59:39.758247  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vs524
	I0923 12:59:39.758252  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.758259  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.758265  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.761790  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:39.957921  682373 request.go:632] Waited for 195.366189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.957991  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:39.958005  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:39.958013  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:39.958019  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:39.962060  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:39.962614  682373 pod_ready.go:93] pod "kube-proxy-vs524" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:39.962646  682373 pod_ready.go:82] duration metric: took 400.470478ms for pod "kube-proxy-vs524" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:39.962661  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.158575  682373 request.go:632] Waited for 195.810945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:59:40.158664  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z6ss5
	I0923 12:59:40.158676  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.158687  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.158696  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.161968  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.358036  682373 request.go:632] Waited for 195.378024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:40.358107  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:40.358112  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.358120  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.358124  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.361928  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.362451  682373 pod_ready.go:93] pod "kube-proxy-z6ss5" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:40.362474  682373 pod_ready.go:82] duration metric: took 399.805025ms for pod "kube-proxy-z6ss5" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.362484  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.558528  682373 request.go:632] Waited for 195.950146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:59:40.558598  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312
	I0923 12:59:40.558612  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.558621  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.558625  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.562266  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.758487  682373 request.go:632] Waited for 195.542399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:40.758572  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312
	I0923 12:59:40.758580  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.758591  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.758597  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.761825  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:40.762402  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:40.762425  682373 pod_ready.go:82] duration metric: took 399.935026ms for pod "kube-scheduler-ha-097312" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.762434  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:40.958691  682373 request.go:632] Waited for 196.142693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:59:40.958767  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m02
	I0923 12:59:40.958774  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:40.958782  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:40.958789  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:40.962833  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:41.157936  682373 request.go:632] Waited for 194.384412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:41.158022  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m02
	I0923 12:59:41.158027  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.158035  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.158040  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.161682  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:41.162279  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:41.162303  682373 pod_ready.go:82] duration metric: took 399.860916ms for pod "kube-scheduler-ha-097312-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:41.162316  682373 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:41.358427  682373 request.go:632] Waited for 196.013005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m03
	I0923 12:59:41.358521  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-097312-m03
	I0923 12:59:41.358530  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.358541  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.358548  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.362666  682373 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:59:41.557722  682373 request.go:632] Waited for 194.306447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:41.557785  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes/ha-097312-m03
	I0923 12:59:41.557790  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.557799  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.557805  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.561165  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:41.561618  682373 pod_ready.go:93] pod "kube-scheduler-ha-097312-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:59:41.561638  682373 pod_ready.go:82] duration metric: took 399.3114ms for pod "kube-scheduler-ha-097312-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:59:41.561649  682373 pod_ready.go:39] duration metric: took 5.200192468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
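The pod_ready waits above poll each control-plane pod until its Ready condition reports "True", re-reading the pod and its node on every pass. A minimal client-go sketch of that per-pod check, not minikube's own code; the kubeconfig path and pod name are illustrative placeholders:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Illustrative kubeconfig path; the test uses the profile's generated kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "etcd-ha-097312")
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", ready)
}
```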
	I0923 12:59:41.561668  682373 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:59:41.561726  682373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:59:41.578487  682373 api_server.go:72] duration metric: took 22.999797093s to wait for apiserver process to appear ...
	I0923 12:59:41.578520  682373 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:59:41.578549  682373 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I0923 12:59:41.583195  682373 api_server.go:279] https://192.168.39.160:8443/healthz returned 200:
	ok
	I0923 12:59:41.583283  682373 round_trippers.go:463] GET https://192.168.39.160:8443/version
	I0923 12:59:41.583292  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.583300  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.583303  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.584184  682373 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0923 12:59:41.584348  682373 api_server.go:141] control plane version: v1.31.1
	I0923 12:59:41.584376  682373 api_server.go:131] duration metric: took 5.84872ms to wait for apiserver health ...
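The healthz and version probes above are plain GETs against the API server; the repeated "Waited ... due to client-side throttling" lines come from the client's own rate limit, not the server. A client-go sketch of the same two calls; the QPS/Burst values are assumptions shown only to illustrate how the throttling waits could be reduced:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	// Assumed values: the client-go default QPS is low (about 5), which is what
	// produces the client-side throttling waits seen throughout this log.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz, expected body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("healthz:", string(body))

	// GET /version, e.g. v1.31.1.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
```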
	I0923 12:59:41.584386  682373 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:59:41.757749  682373 request.go:632] Waited for 173.249304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:41.757819  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:41.757848  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.757861  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.757869  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.765026  682373 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:59:41.775103  682373 system_pods.go:59] 24 kube-system pods found
	I0923 12:59:41.775147  682373 system_pods.go:61] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:59:41.775153  682373 system_pods.go:61] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:59:41.775158  682373 system_pods.go:61] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:59:41.775162  682373 system_pods.go:61] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:59:41.775166  682373 system_pods.go:61] "etcd-ha-097312-m03" [47812605-2ed5-49dc-acae-7b8ff115b1c5] Running
	I0923 12:59:41.775171  682373 system_pods.go:61] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:59:41.775176  682373 system_pods.go:61] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:59:41.775181  682373 system_pods.go:61] "kindnet-lcrdg" [fc7c4594-c83a-4254-a163-8f66b34c53c0] Running
	I0923 12:59:41.775186  682373 system_pods.go:61] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:59:41.775191  682373 system_pods.go:61] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:59:41.775195  682373 system_pods.go:61] "kube-apiserver-ha-097312-m03" [cfc94901-d0f5-4a59-a8d2-8841462a3166] Running
	I0923 12:59:41.775203  682373 system_pods.go:61] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:59:41.775214  682373 system_pods.go:61] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:59:41.775219  682373 system_pods.go:61] "kube-controller-manager-ha-097312-m03" [70886840-6967-4d3c-a0b7-e6448711e0cc] Running
	I0923 12:59:41.775224  682373 system_pods.go:61] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:59:41.775249  682373 system_pods.go:61] "kube-proxy-vs524" [92738649-c52b-44d5-866b-8cda751a538c] Running
	I0923 12:59:41.775255  682373 system_pods.go:61] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:59:41.775258  682373 system_pods.go:61] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:59:41.775264  682373 system_pods.go:61] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:59:41.775268  682373 system_pods.go:61] "kube-scheduler-ha-097312-m03" [7811405d-6f57-440f-a9a2-178f2a094f61] Running
	I0923 12:59:41.775273  682373 system_pods.go:61] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:59:41.775276  682373 system_pods.go:61] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:59:41.775282  682373 system_pods.go:61] "kube-vip-ha-097312-m03" [1de093b7-e402-48af-ac83-09f59ffd213e] Running
	I0923 12:59:41.775287  682373 system_pods.go:61] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:59:41.775297  682373 system_pods.go:74] duration metric: took 190.903005ms to wait for pod list to return data ...
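The kube-system sweep above lists the namespace once and checks every pod it finds. A reduced sketch of that listing, assuming a clientset built as in the earlier example:

```go
package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// countRunning lists kube-system and counts pods whose phase is Running.
func countRunning(ctx context.Context, cs *kubernetes.Clientset) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	fmt.Printf("%d kube-system pods found, %d running\n", len(pods.Items), running)
	return nil
}
```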
	I0923 12:59:41.775310  682373 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:59:41.957641  682373 request.go:632] Waited for 182.223415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:59:41.957725  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:59:41.957732  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:41.957741  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:41.957748  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:41.961638  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:41.961870  682373 default_sa.go:45] found service account: "default"
	I0923 12:59:41.961901  682373 default_sa.go:55] duration metric: took 186.579724ms for default service account to be created ...
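The default-service-account wait is a single Get in the "default" namespace. A sketch under the same clientset assumption:

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// defaultServiceAccountExists checks that the "default" ServiceAccount has been created.
func defaultServiceAccountExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err != nil {
		return false, err
	}
	return true, nil
}
```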
	I0923 12:59:41.961914  682373 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:59:42.158106  682373 request.go:632] Waited for 196.090807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:42.158184  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/namespaces/kube-system/pods
	I0923 12:59:42.158191  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:42.158202  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:42.158209  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:42.163268  682373 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:59:42.169516  682373 system_pods.go:86] 24 kube-system pods found
	I0923 12:59:42.169555  682373 system_pods.go:89] "coredns-7c65d6cfc9-6g9x2" [af485e47-0e78-483e-8f35-a7a4ab53f014] Running
	I0923 12:59:42.169562  682373 system_pods.go:89] "coredns-7c65d6cfc9-txcxz" [e6da5f25-f232-4649-9801-f3577210ea2e] Running
	I0923 12:59:42.169566  682373 system_pods.go:89] "etcd-ha-097312" [7f27c05d-176f-4397-8966-a2cc29556265] Running
	I0923 12:59:42.169570  682373 system_pods.go:89] "etcd-ha-097312-m02" [50d4b55f-31d3-4351-8574-506bbc4167d6] Running
	I0923 12:59:42.169574  682373 system_pods.go:89] "etcd-ha-097312-m03" [47812605-2ed5-49dc-acae-7b8ff115b1c5] Running
	I0923 12:59:42.169578  682373 system_pods.go:89] "kindnet-hcclj" [0e57c02a-6f9f-4829-9838-6bed660540a4] Running
	I0923 12:59:42.169582  682373 system_pods.go:89] "kindnet-j8l5t" [49216705-6e85-4b98-afbd-f4228b774321] Running
	I0923 12:59:42.169587  682373 system_pods.go:89] "kindnet-lcrdg" [fc7c4594-c83a-4254-a163-8f66b34c53c0] Running
	I0923 12:59:42.169596  682373 system_pods.go:89] "kube-apiserver-ha-097312" [4b8954a1-188a-4734-8e79-eace293c35e9] Running
	I0923 12:59:42.169603  682373 system_pods.go:89] "kube-apiserver-ha-097312-m02" [6022c193-400e-4641-8c4d-d24f0ce3e6ea] Running
	I0923 12:59:42.169609  682373 system_pods.go:89] "kube-apiserver-ha-097312-m03" [cfc94901-d0f5-4a59-a8d2-8841462a3166] Running
	I0923 12:59:42.169617  682373 system_pods.go:89] "kube-controller-manager-ha-097312" [c085db05-26f3-471b-baf1-f90cbfdacf19] Running
	I0923 12:59:42.169629  682373 system_pods.go:89] "kube-controller-manager-ha-097312-m02" [4cc903b8-c0c1-4ef7-9338-44af86be9280] Running
	I0923 12:59:42.169636  682373 system_pods.go:89] "kube-controller-manager-ha-097312-m03" [70886840-6967-4d3c-a0b7-e6448711e0cc] Running
	I0923 12:59:42.169643  682373 system_pods.go:89] "kube-proxy-drj8m" [a1c5535e-7139-441f-9065-ef7d147582d2] Running
	I0923 12:59:42.169653  682373 system_pods.go:89] "kube-proxy-vs524" [92738649-c52b-44d5-866b-8cda751a538c] Running
	I0923 12:59:42.169657  682373 system_pods.go:89] "kube-proxy-z6ss5" [7bff6204-a427-48c8-83a3-448ff1328b1b] Running
	I0923 12:59:42.169661  682373 system_pods.go:89] "kube-scheduler-ha-097312" [408ec8ae-eeca-4026-9582-45e7d209f09c] Running
	I0923 12:59:42.169665  682373 system_pods.go:89] "kube-scheduler-ha-097312-m02" [71e7793e-3d21-476a-84de-6fc84631e313] Running
	I0923 12:59:42.169669  682373 system_pods.go:89] "kube-scheduler-ha-097312-m03" [7811405d-6f57-440f-a9a2-178f2a094f61] Running
	I0923 12:59:42.169672  682373 system_pods.go:89] "kube-vip-ha-097312" [b26dfdf8-fa4b-4822-a88c-fe7af53be81b] Running
	I0923 12:59:42.169679  682373 system_pods.go:89] "kube-vip-ha-097312-m02" [910ae281-c533-4aa6-acb0-c1b69dddd842] Running
	I0923 12:59:42.169684  682373 system_pods.go:89] "kube-vip-ha-097312-m03" [1de093b7-e402-48af-ac83-09f59ffd213e] Running
	I0923 12:59:42.169687  682373 system_pods.go:89] "storage-provisioner" [0bbda806-091c-4e48-982a-296bbf03abd6] Running
	I0923 12:59:42.169694  682373 system_pods.go:126] duration metric: took 207.772669ms to wait for k8s-apps to be running ...
	I0923 12:59:42.169708  682373 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:59:42.169771  682373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:59:42.186008  682373 system_svc.go:56] duration metric: took 16.290747ms WaitForService to wait for kubelet
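The kubelet check shells out to systemd and only inspects the exit code. A rough local equivalent, simplified to a plain systemctl call without minikube's ssh_runner or sudo:

```go
package example

import "os/exec"

// kubeletActive returns true when `systemctl is-active --quiet kubelet` exits 0.
// --quiet suppresses output, so the exit code alone carries the answer.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
```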
	I0923 12:59:42.186050  682373 kubeadm.go:582] duration metric: took 23.607368403s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:59:42.186083  682373 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:59:42.358541  682373 request.go:632] Waited for 172.350275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.160:8443/api/v1/nodes
	I0923 12:59:42.358620  682373 round_trippers.go:463] GET https://192.168.39.160:8443/api/v1/nodes
	I0923 12:59:42.358625  682373 round_trippers.go:469] Request Headers:
	I0923 12:59:42.358634  682373 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:59:42.358638  682373 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:59:42.361922  682373 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:59:42.362876  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:59:42.362900  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:59:42.362911  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:59:42.362914  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:59:42.362918  682373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:59:42.362921  682373 node_conditions.go:123] node cpu capacity is 2
	I0923 12:59:42.362925  682373 node_conditions.go:105] duration metric: took 176.836519ms to run NodePressure ...
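The NodePressure step above reads each node's reported capacity (ephemeral storage and CPU). A sketch of the same read with client-go, again assuming the earlier clientset:

```go
package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity prints each node's ephemeral-storage and CPU capacity.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
```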
	I0923 12:59:42.362937  682373 start.go:241] waiting for startup goroutines ...
	I0923 12:59:42.362958  682373 start.go:255] writing updated cluster config ...
	I0923 12:59:42.363261  682373 ssh_runner.go:195] Run: rm -f paused
	I0923 12:59:42.417533  682373 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 12:59:42.419577  682373 out.go:177] * Done! kubectl is now configured to use "ha-097312" cluster and "default" namespace by default
	
	
	==> CRI-O <==
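The entries below are CRI-O's debug log for ordinary CRI gRPC calls (Version, ImageFsInfo, ListContainers). A client-side sketch that issues the same calls, assuming the default CRI-O socket path and the k8s.io/cri-api v1 bindings; it is not part of this test run:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path for CRI-O; other runtimes expose different sockets.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx := context.Background()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version, e.g. "cri-o 1.29.1".
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

	// /runtime.v1.ImageService/ImageFsInfo: per-filesystem image storage usage.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("image filesystems:", len(fs.ImageFilesystems))

	// /runtime.v1.RuntimeService/ListContainers with an empty filter, as in the log.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}
```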
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.074483557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096619074462840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc8c0b21-00ca-45a9-8c92-7f8428fd4338 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.074947678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e377d75-e671-4a2a-ab11-9264f8034508 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.074998771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e377d75-e671-4a2a-ab11-9264f8034508 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.075276309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e377d75-e671-4a2a-ab11-9264f8034508 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.111723003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a658effe-79cc-41ca-90a4-255a762724a3 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.111826646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a658effe-79cc-41ca-90a4-255a762724a3 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.113179259Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcb2896a-e52b-43b5-b424-eaa9e02d0604 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.113614332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096619113591277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcb2896a-e52b-43b5-b424-eaa9e02d0604 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.114247093Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ee80cbf-65bd-4818-bbad-e32736b344f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.114324199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ee80cbf-65bd-4818-bbad-e32736b344f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.115052449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ee80cbf-65bd-4818-bbad-e32736b344f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.162099908Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e1cb4ab-640f-4eb3-aa9f-e0556b051ac2 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.162209186Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e1cb4ab-640f-4eb3-aa9f-e0556b051ac2 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.163491092Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4ef04b0-d414-4775-8eef-04ee07baf385 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.164060929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096619164035343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4ef04b0-d414-4775-8eef-04ee07baf385 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.164551166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9dd1911-230e-4a71-994d-c9dff2bfab27 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.164656939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9dd1911-230e-4a71-994d-c9dff2bfab27 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.164947422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9dd1911-230e-4a71-994d-c9dff2bfab27 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.205787743Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d0c9cd2-88d7-41d6-a6ee-e1d3c0bef9a8 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.205884193Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d0c9cd2-88d7-41d6-a6ee-e1d3c0bef9a8 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.207439499Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3691831-6f30-4095-870c-f3cc2f6a1b1a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.207909100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096619207885449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3691831-6f30-4095-870c-f3cc2f6a1b1a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.208554409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a798f6cb-82ea-4097-9e5f-d434816f086e name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.208655187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a798f6cb-82ea-4097-9e5f-d434816f086e name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:03:39 ha-097312 crio[666]: time="2024-09-23 13:03:39.208919054Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096387328810156,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05,PodSandboxId:287ae69fbba66da4b73f16d080fbf336ffcfc42104571090400deb8b10a0a4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096240448358828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240450387241,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096240372155642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e
78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17270962
28373682156,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096228199432737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb,PodSandboxId:a65df228e8bfd8d4d6a9b85c6cbab162a4a128e8612cbb781b68b21b0f017fe2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096218413186299,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517a285369c2d468692e1e5ab2e508d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096215629421014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096215612519576,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333,PodSandboxId:66109e91b1f789d247a6b16e21533a1c912ebdf0386ca6f2b2a221f5a873a754,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096215567156911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492,PodSandboxId:d5fd7dbc75ab3b9c7a6cdfac29a7ad6d6d093ed1004322d9f8640bbfe66c5388,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096215548571609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a798f6cb-82ea-4097-9e5f-d434816f086e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0c8b3d3e1c960       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   01a99cef826dd       busybox-7dff88458-4rksx
	6494b72ca963e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   09f40d2b50613       coredns-7c65d6cfc9-txcxz
	070d45bce8ff9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   287ae69fbba66       storage-provisioner
	cead05960724e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   d6346e81a93e3       coredns-7c65d6cfc9-6g9x2
	03670fd92c8a8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   fa074de98ab0b       kindnet-j8l5t
	37b6ad938698e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   8efd7c52e41eb       kube-proxy-drj8m
	e5095373416a8       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   a65df228e8bfd       kube-vip-ha-097312
	9bfbdbe2c35f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   46a49b5018b58       etcd-ha-097312
	5c9e8fb5e944b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   e4cdc1cb583f4       kube-scheduler-ha-097312
	1c28bf3f4d80d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   66109e91b1f78       kube-apiserver-ha-097312
	476ad705f8968       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   d5fd7dbc75ab3       kube-controller-manager-ha-097312
	
	
	==> coredns [6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828] <==
	[INFO] 10.244.1.2:45817 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000653057s
	[INFO] 10.244.1.2:52272 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.003009815s
	[INFO] 10.244.0.4:33030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115409s
	[INFO] 10.244.0.4:45577 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003386554s
	[INFO] 10.244.0.4:34507 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148722s
	[INFO] 10.244.0.4:56395 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159124s
	[INFO] 10.244.2.2:48128 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168767s
	[INFO] 10.244.2.2:38686 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001366329s
	[INFO] 10.244.2.2:54280 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098386s
	[INFO] 10.244.2.2:36178 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083893s
	[INFO] 10.244.1.2:36479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151724s
	[INFO] 10.244.1.2:52581 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000183399s
	[INFO] 10.244.1.2:36358 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015472s
	[INFO] 10.244.0.4:37418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198313s
	[INFO] 10.244.2.2:52660 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011216s
	[INFO] 10.244.1.2:33460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123493s
	[INFO] 10.244.1.2:42619 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000187646s
	[INFO] 10.244.0.4:50282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110854s
	[INFO] 10.244.0.4:48865 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169177s
	[INFO] 10.244.0.4:52671 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110814s
	[INFO] 10.244.2.2:49013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236486s
	[INFO] 10.244.2.2:37600 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000236051s
	[INFO] 10.244.2.2:54687 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137539s
	[INFO] 10.244.1.2:37754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237319s
	[INFO] 10.244.1.2:50571 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167449s
	
	
	==> coredns [cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab] <==
	[INFO] 10.244.0.4:37338 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004244948s
	[INFO] 10.244.0.4:45643 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226629s
	[INFO] 10.244.0.4:55589 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138142s
	[INFO] 10.244.0.4:39714 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089285s
	[INFO] 10.244.2.2:36050 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198766s
	[INFO] 10.244.2.2:57929 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002002291s
	[INFO] 10.244.2.2:39920 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241567s
	[INFO] 10.244.2.2:40496 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084082s
	[INFO] 10.244.1.2:53956 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001953841s
	[INFO] 10.244.1.2:39693 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161735s
	[INFO] 10.244.1.2:59255 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001392042s
	[INFO] 10.244.1.2:33162 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137674s
	[INFO] 10.244.1.2:56819 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135224s
	[INFO] 10.244.0.4:58065 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142108s
	[INFO] 10.244.0.4:49950 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114547s
	[INFO] 10.244.0.4:48467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051186s
	[INFO] 10.244.2.2:57485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120774s
	[INFO] 10.244.2.2:47368 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105596s
	[INFO] 10.244.2.2:52953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077623s
	[INFO] 10.244.1.2:45470 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011128s
	[INFO] 10.244.1.2:35601 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157053s
	[INFO] 10.244.0.4:60925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000610878s
	[INFO] 10.244.2.2:48335 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176802s
	[INFO] 10.244.1.2:39758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190843s
	[INFO] 10.244.1.2:35713 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110523s
	
	
	==> describe nodes <==
	Name:               ha-097312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_57_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:57:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:03:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:00:05 +0000   Mon, 23 Sep 2024 12:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    ha-097312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fef43eb48e8a42b5815ed7c921d42333
	  System UUID:                fef43eb4-8e8a-42b5-815e-d7c921d42333
	  Boot ID:                    22749ef5-5a8a-4d9f-b42e-96dd2d4e32eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4rksx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 coredns-7c65d6cfc9-6g9x2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m33s
	  kube-system                 coredns-7c65d6cfc9-txcxz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m33s
	  kube-system                 etcd-ha-097312                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m37s
	  kube-system                 kindnet-j8l5t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m33s
	  kube-system                 kube-apiserver-ha-097312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-controller-manager-ha-097312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-proxy-drj8m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-scheduler-ha-097312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-vip-ha-097312                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m30s  kube-proxy       
	  Normal  Starting                 6m37s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m37s  kubelet          Node ha-097312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s  kubelet          Node ha-097312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s  kubelet          Node ha-097312 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m34s  node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal  NodeReady                6m20s  kubelet          Node ha-097312 status is now: NodeReady
	  Normal  RegisteredNode           5m34s  node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal  RegisteredNode           4m16s  node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	
	
	Name:               ha-097312-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_57_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:57:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:01:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 12:59:59 +0000   Mon, 23 Sep 2024 13:01:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    ha-097312-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 226ea4f6db5b44f7bdab73033cb7ae33
	  System UUID:                226ea4f6-db5b-44f7-bdab-73033cb7ae33
	  Boot ID:                    8cb64dab-25d7-4dcd-9c08-1dcc2d214767
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wz97n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-097312-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m40s
	  kube-system                 kindnet-hcclj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m42s
	  kube-system                 kube-apiserver-ha-097312-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-ha-097312-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-z6ss5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-scheduler-ha-097312-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-vip-ha-097312-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m38s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m42s (x8 over 5m43s)  kubelet          Node ha-097312-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m42s (x8 over 5m43s)  kubelet          Node ha-097312-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m42s (x7 over 5m43s)  kubelet          Node ha-097312-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  NodeNotReady             116s                   node-controller  Node ha-097312-m02 status is now: NodeNotReady
	
	
	Name:               ha-097312-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_59_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:59:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:03:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:00:15 +0000   Mon, 23 Sep 2024 12:59:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    ha-097312-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 21b2a00385684360824371ae7a980598
	  System UUID:                21b2a003-8568-4360-8243-71ae7a980598
	  Boot ID:                    960c8b17-8be2-4e75-85e5-dc8c84a6f034
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tx8b9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-097312-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m23s
	  kube-system                 kindnet-lcrdg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m25s
	  kube-system                 kube-apiserver-ha-097312-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-ha-097312-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-vs524                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-ha-097312-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-vip-ha-097312-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m25s (x8 over 4m25s)  kubelet          Node ha-097312-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x8 over 4m25s)  kubelet          Node ha-097312-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x7 over 4m25s)  kubelet          Node ha-097312-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	
	
	Name:               ha-097312-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_00_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:00:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:03:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:00:55 +0000   Mon, 23 Sep 2024 13:00:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    ha-097312-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 23903b49596849ed8163495c455231a4
	  System UUID:                23903b49-5968-49ed-8163-495c455231a4
	  Boot ID:                    b209787f-e977-446d-9180-ea83c0a28142
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pzs94       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-7hlnw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m9s                   kube-proxy       
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m15s)  kubelet          Node ha-097312-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m15s)  kubelet          Node ha-097312-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m15s)  kubelet          Node ha-097312-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-097312-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep23 12:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052097] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038111] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.768653] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.021290] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.561361] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.704633] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.056129] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055848] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170191] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.146996] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.300750] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.930853] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +3.791133] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.059635] kauditd_printk_skb: 158 callbacks suppressed
	[Sep23 12:57] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.088641] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.268527] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.165221] kauditd_printk_skb: 38 callbacks suppressed
	[Sep23 12:58] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad] <==
	{"level":"warn","ts":"2024-09-23T13:03:39.488167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.495276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.499204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.508353Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.510569Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.517523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.527146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.531568Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.535863Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.542375Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.551892Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.559590Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.564299Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.568676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.576468Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.583733Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.585433Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.586118Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.599307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.605561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.609798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.614920Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.621899Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.628594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T13:03:39.683497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b56431cc78e971c","from":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:03:39 up 7 min,  0 users,  load average: 0.21, 0.25, 0.13
	Linux ha-097312 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c] <==
	I0923 13:03:09.636075       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:03:19.639090       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:03:19.639126       1 main.go:299] handling current node
	I0923 13:03:19.639140       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:03:19.639145       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:03:19.639271       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:03:19.639276       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:03:19.639330       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:03:19.639334       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:03:29.638527       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:03:29.638610       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:03:29.638800       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:03:29.638822       1 main.go:299] handling current node
	I0923 13:03:29.638844       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:03:29.638848       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:03:29.638897       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:03:29.638914       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:03:39.643808       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:03:39.643930       1 main.go:299] handling current node
	I0923 13:03:39.643963       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:03:39.643985       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:03:39.644128       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:03:39.644167       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:03:39.644254       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:03:39.644278       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333] <==
	I0923 12:57:02.020359       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0923 12:57:02.088327       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 12:57:06.152802       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0923 12:57:06.755775       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0923 12:57:57.925529       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 12:57:57.925590       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.353µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0923 12:57:57.926736       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0923 12:57:57.927891       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 12:57:57.929106       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.691541ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0923 12:59:48.392448       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33954: use of closed network connection
	E0923 12:59:48.613880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33958: use of closed network connection
	E0923 12:59:48.808088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58634: use of closed network connection
	E0923 12:59:49.001780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58648: use of closed network connection
	E0923 12:59:49.197483       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58666: use of closed network connection
	E0923 12:59:49.377774       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58694: use of closed network connection
	E0923 12:59:49.575983       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58712: use of closed network connection
	E0923 12:59:49.768426       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58734: use of closed network connection
	E0923 12:59:49.967451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58756: use of closed network connection
	E0923 12:59:50.265392       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58784: use of closed network connection
	E0923 12:59:50.450981       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58804: use of closed network connection
	E0923 12:59:50.652809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58810: use of closed network connection
	E0923 12:59:50.861752       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58822: use of closed network connection
	E0923 12:59:51.064797       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58830: use of closed network connection
	E0923 12:59:51.264921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58846: use of closed network connection
	W0923 13:01:20.906998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160 192.168.39.174]
	
	
	==> kube-controller-manager [476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492] <==
	I0923 13:00:25.249956       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-097312-m04" podCIDRs=["10.244.3.0/24"]
	I0923 13:00:25.250021       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.250063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.268205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.370449       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.456902       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.813447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:25.983304       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-097312-m04"
	I0923 13:00:25.983773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:26.090111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:28.408814       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:28.484815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:35.660172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:45.897287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:45.897415       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-097312-m04"
	I0923 13:00:45.912394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:46.005249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:00:55.964721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:01:43.436073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	I0923 13:01:43.436177       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-097312-m04"
	I0923 13:01:43.460744       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	I0923 13:01:43.587511       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.511152ms"
	I0923 13:01:43.588537       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.099µs"
	I0923 13:01:46.104982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	I0923 13:01:48.741428       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	
	
	==> kube-proxy [37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 12:57:08.497927       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 12:57:08.513689       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.160"]
	E0923 12:57:08.513839       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 12:57:08.553172       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 12:57:08.553258       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 12:57:08.553295       1 server_linux.go:169] "Using iptables Proxier"
	I0923 12:57:08.556859       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 12:57:08.557876       1 server.go:483] "Version info" version="v1.31.1"
	I0923 12:57:08.557939       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:57:08.564961       1 config.go:199] "Starting service config controller"
	I0923 12:57:08.565367       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 12:57:08.565715       1 config.go:328] "Starting node config controller"
	I0923 12:57:08.570600       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 12:57:08.566364       1 config.go:105] "Starting endpoint slice config controller"
	I0923 12:57:08.570712       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 12:57:08.570719       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 12:57:08.666413       1 shared_informer.go:320] Caches are synced for service config
	I0923 12:57:08.670755       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431] <==
	W0923 12:57:00.057793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 12:57:00.058398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.080608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.080826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.112818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.112990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.129261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 12:57:00.129830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.181934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:57:00.182022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.183285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.183358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.190093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 12:57:00.190177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.223708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:57:00.223794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.255027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.255136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.582968       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:57:00.583073       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 12:57:02.534371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 12:59:14.854178       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vs524\": pod kube-proxy-vs524 is already assigned to node \"ha-097312-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vs524" node="ha-097312-m03"
	E0923 12:59:14.854357       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 92738649-c52b-44d5-866b-8cda751a538c(kube-system/kube-proxy-vs524) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vs524"
	E0923 12:59:14.854394       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vs524\": pod kube-proxy-vs524 is already assigned to node \"ha-097312-m03\"" pod="kube-system/kube-proxy-vs524"
	I0923 12:59:14.854436       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vs524" node="ha-097312-m03"
	
	
	==> kubelet <==
	Sep 23 13:02:02 ha-097312 kubelet[1304]: E0923 13:02:02.214007    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096522213607138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:02 ha-097312 kubelet[1304]: E0923 13:02:02.214059    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096522213607138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:12 ha-097312 kubelet[1304]: E0923 13:02:12.219070    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096532215431820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:12 ha-097312 kubelet[1304]: E0923 13:02:12.219206    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096532215431820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:22 ha-097312 kubelet[1304]: E0923 13:02:22.225821    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096542223481825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:22 ha-097312 kubelet[1304]: E0923 13:02:22.230227    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096542223481825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:32 ha-097312 kubelet[1304]: E0923 13:02:32.232689    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096552232228787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:32 ha-097312 kubelet[1304]: E0923 13:02:32.233031    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096552232228787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:42 ha-097312 kubelet[1304]: E0923 13:02:42.235021    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096562234565302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:42 ha-097312 kubelet[1304]: E0923 13:02:42.235083    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096562234565302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:52 ha-097312 kubelet[1304]: E0923 13:02:52.237647    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096572237152536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:02:52 ha-097312 kubelet[1304]: E0923 13:02:52.237938    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096572237152536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:02 ha-097312 kubelet[1304]: E0923 13:03:02.165544    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:03:02 ha-097312 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:03:02 ha-097312 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:03:02 ha-097312 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:03:02 ha-097312 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:03:02 ha-097312 kubelet[1304]: E0923 13:03:02.240514    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096582240150204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:02 ha-097312 kubelet[1304]: E0923 13:03:02.240606    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096582240150204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:12 ha-097312 kubelet[1304]: E0923 13:03:12.243234    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096592242789885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:12 ha-097312 kubelet[1304]: E0923 13:03:12.243281    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096592242789885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:22 ha-097312 kubelet[1304]: E0923 13:03:22.245580    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096602245012698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:22 ha-097312 kubelet[1304]: E0923 13:03:22.246002    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096602245012698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:32 ha-097312 kubelet[1304]: E0923 13:03:32.247916    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096612247450002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:03:32 ha-097312 kubelet[1304]: E0923 13:03:32.247947    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096612247450002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-097312 -n ha-097312
helpers_test.go:261: (dbg) Run:  kubectl --context ha-097312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.36s)
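The node and component dumps above were collected by the standard post-mortem helpers. A minimal sketch for re-running the same checks by hand against the ha-097312 profile (assumes the profile and its kubeconfig context still exist; the `describe nodes` and `logs` calls are illustrative additions, not taken from the harness output above):

	# API server health as reported by minikube (same call as helpers_test.go:254 above)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-097312 -n ha-097312
	# pods not in Running phase, across all namespaces (same call as helpers_test.go:261 above)
	kubectl --context ha-097312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# per-node conditions and events, as captured in the node dump above (illustrative)
	kubectl --context ha-097312 describe nodes
	# full component logs (etcd, kube-proxy, kubelet, ...) for the profile (illustrative)
	out/minikube-linux-amd64 -p ha-097312 logs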

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (397.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-097312 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-097312 -v=7 --alsologtostderr
E0923 13:05:29.177650  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:05:36.850319  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-097312 -v=7 --alsologtostderr: exit status 82 (2m1.909948891s)

                                                
                                                
-- stdout --
	* Stopping node "ha-097312-m04"  ...
	* Stopping node "ha-097312-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:03:44.778004  687579 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:03:44.778131  687579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:03:44.778142  687579 out.go:358] Setting ErrFile to fd 2...
	I0923 13:03:44.778148  687579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:03:44.778403  687579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:03:44.778643  687579 out.go:352] Setting JSON to false
	I0923 13:03:44.778732  687579 mustload.go:65] Loading cluster: ha-097312
	I0923 13:03:44.779145  687579 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:03:44.779235  687579 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 13:03:44.779426  687579 mustload.go:65] Loading cluster: ha-097312
	I0923 13:03:44.779558  687579 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:03:44.779588  687579 stop.go:39] StopHost: ha-097312-m04
	I0923 13:03:44.779956  687579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:03:44.780009  687579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:03:44.795588  687579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35453
	I0923 13:03:44.796261  687579 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:03:44.796894  687579 main.go:141] libmachine: Using API Version  1
	I0923 13:03:44.796911  687579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:03:44.797312  687579 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:03:44.799950  687579 out.go:177] * Stopping node "ha-097312-m04"  ...
	I0923 13:03:44.801405  687579 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0923 13:03:44.801445  687579 main.go:141] libmachine: (ha-097312-m04) Calling .DriverName
	I0923 13:03:44.801748  687579 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0923 13:03:44.801779  687579 main.go:141] libmachine: (ha-097312-m04) Calling .GetSSHHostname
	I0923 13:03:44.804777  687579 main.go:141] libmachine: (ha-097312-m04) DBG | domain ha-097312-m04 has defined MAC address 52:54:00:b7:b6:3b in network mk-ha-097312
	I0923 13:03:44.805169  687579 main.go:141] libmachine: (ha-097312-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b6:3b", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 14:00:08 +0000 UTC Type:0 Mac:52:54:00:b7:b6:3b Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-097312-m04 Clientid:01:52:54:00:b7:b6:3b}
	I0923 13:03:44.805209  687579 main.go:141] libmachine: (ha-097312-m04) DBG | domain ha-097312-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b7:b6:3b in network mk-ha-097312
	I0923 13:03:44.805366  687579 main.go:141] libmachine: (ha-097312-m04) Calling .GetSSHPort
	I0923 13:03:44.805557  687579 main.go:141] libmachine: (ha-097312-m04) Calling .GetSSHKeyPath
	I0923 13:03:44.805703  687579 main.go:141] libmachine: (ha-097312-m04) Calling .GetSSHUsername
	I0923 13:03:44.805851  687579 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m04/id_rsa Username:docker}
	I0923 13:03:44.904000  687579 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0923 13:03:44.958392  687579 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0923 13:03:45.015251  687579 main.go:141] libmachine: Stopping "ha-097312-m04"...
	I0923 13:03:45.015313  687579 main.go:141] libmachine: (ha-097312-m04) Calling .GetState
	I0923 13:03:45.017027  687579 main.go:141] libmachine: (ha-097312-m04) Calling .Stop
	I0923 13:03:45.021552  687579 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 0/120
	I0923 13:03:46.177534  687579 main.go:141] libmachine: (ha-097312-m04) Calling .GetState
	I0923 13:03:46.178876  687579 main.go:141] libmachine: Machine "ha-097312-m04" was stopped.
	I0923 13:03:46.178921  687579 stop.go:75] duration metric: took 1.377500238s to stop
	I0923 13:03:46.178950  687579 stop.go:39] StopHost: ha-097312-m03
	I0923 13:03:46.179346  687579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:03:46.179403  687579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:03:46.195114  687579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0923 13:03:46.195605  687579 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:03:46.196224  687579 main.go:141] libmachine: Using API Version  1
	I0923 13:03:46.196248  687579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:03:46.196673  687579 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:03:46.200086  687579 out.go:177] * Stopping node "ha-097312-m03"  ...
	I0923 13:03:46.201659  687579 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0923 13:03:46.201718  687579 main.go:141] libmachine: (ha-097312-m03) Calling .DriverName
	I0923 13:03:46.202074  687579 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0923 13:03:46.202105  687579 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHHostname
	I0923 13:03:46.205711  687579 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 13:03:46.206227  687579 main.go:141] libmachine: (ha-097312-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:fc:65", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:58:39 +0000 UTC Type:0 Mac:52:54:00:39:fc:65 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-097312-m03 Clientid:01:52:54:00:39:fc:65}
	I0923 13:03:46.206265  687579 main.go:141] libmachine: (ha-097312-m03) DBG | domain ha-097312-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:39:fc:65 in network mk-ha-097312
	I0923 13:03:46.206426  687579 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHPort
	I0923 13:03:46.206660  687579 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHKeyPath
	I0923 13:03:46.206815  687579 main.go:141] libmachine: (ha-097312-m03) Calling .GetSSHUsername
	I0923 13:03:46.206932  687579 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m03/id_rsa Username:docker}
	I0923 13:03:46.299137  687579 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0923 13:03:46.353589  687579 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0923 13:03:46.414711  687579 main.go:141] libmachine: Stopping "ha-097312-m03"...
	I0923 13:03:46.414741  687579 main.go:141] libmachine: (ha-097312-m03) Calling .GetState
	I0923 13:03:46.416665  687579 main.go:141] libmachine: (ha-097312-m03) Calling .Stop
	I0923 13:03:46.420218  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 0/120
	I0923 13:03:47.421715  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 1/120
	I0923 13:03:48.423075  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 2/120
	I0923 13:03:49.424709  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 3/120
	I0923 13:03:50.426133  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 4/120
	I0923 13:03:51.428742  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 5/120
	I0923 13:03:52.430761  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 6/120
	I0923 13:03:53.432278  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 7/120
	I0923 13:03:54.434084  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 8/120
	I0923 13:03:55.436324  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 9/120
	I0923 13:03:56.438535  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 10/120
	I0923 13:03:57.440448  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 11/120
	I0923 13:03:58.441977  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 12/120
	I0923 13:03:59.443549  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 13/120
	I0923 13:04:00.444985  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 14/120
	I0923 13:04:01.447159  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 15/120
	I0923 13:04:02.448805  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 16/120
	I0923 13:04:03.450424  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 17/120
	I0923 13:04:04.452576  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 18/120
	I0923 13:04:05.453912  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 19/120
	I0923 13:04:06.456339  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 20/120
	I0923 13:04:07.458107  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 21/120
	I0923 13:04:08.460642  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 22/120
	I0923 13:04:09.462458  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 23/120
	I0923 13:04:10.464288  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 24/120
	I0923 13:04:11.466333  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 25/120
	I0923 13:04:12.468165  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 26/120
	I0923 13:04:13.469772  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 27/120
	I0923 13:04:14.471403  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 28/120
	I0923 13:04:15.472944  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 29/120
	I0923 13:04:16.474908  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 30/120
	I0923 13:04:17.476651  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 31/120
	I0923 13:04:18.478806  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 32/120
	I0923 13:04:19.480460  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 33/120
	I0923 13:04:20.482131  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 34/120
	I0923 13:04:21.484479  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 35/120
	I0923 13:04:22.486077  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 36/120
	I0923 13:04:23.487873  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 37/120
	I0923 13:04:24.489352  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 38/120
	I0923 13:04:25.490845  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 39/120
	I0923 13:04:26.492809  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 40/120
	I0923 13:04:27.494388  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 41/120
	I0923 13:04:28.495954  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 42/120
	I0923 13:04:29.497426  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 43/120
	I0923 13:04:30.499113  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 44/120
	I0923 13:04:31.501567  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 45/120
	I0923 13:04:32.503557  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 46/120
	I0923 13:04:33.505115  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 47/120
	I0923 13:04:34.506770  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 48/120
	I0923 13:04:35.508646  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 49/120
	I0923 13:04:36.510767  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 50/120
	I0923 13:04:37.512356  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 51/120
	I0923 13:04:38.514075  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 52/120
	I0923 13:04:39.515669  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 53/120
	I0923 13:04:40.517416  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 54/120
	I0923 13:04:41.519244  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 55/120
	I0923 13:04:42.520913  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 56/120
	I0923 13:04:43.522634  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 57/120
	I0923 13:04:44.524517  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 58/120
	I0923 13:04:45.526132  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 59/120
	I0923 13:04:46.527960  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 60/120
	I0923 13:04:47.529788  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 61/120
	I0923 13:04:48.531400  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 62/120
	I0923 13:04:49.532982  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 63/120
	I0923 13:04:50.534808  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 64/120
	I0923 13:04:51.537100  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 65/120
	I0923 13:04:52.538596  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 66/120
	I0923 13:04:53.540932  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 67/120
	I0923 13:04:54.542644  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 68/120
	I0923 13:04:55.544800  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 69/120
	I0923 13:04:56.546647  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 70/120
	I0923 13:04:57.548320  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 71/120
	I0923 13:04:58.550014  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 72/120
	I0923 13:04:59.551788  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 73/120
	I0923 13:05:00.553435  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 74/120
	I0923 13:05:01.555943  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 75/120
	I0923 13:05:02.557645  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 76/120
	I0923 13:05:03.559131  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 77/120
	I0923 13:05:04.560683  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 78/120
	I0923 13:05:05.562275  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 79/120
	I0923 13:05:06.564323  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 80/120
	I0923 13:05:07.565797  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 81/120
	I0923 13:05:08.567536  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 82/120
	I0923 13:05:09.569205  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 83/120
	I0923 13:05:10.570766  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 84/120
	I0923 13:05:11.572750  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 85/120
	I0923 13:05:12.574060  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 86/120
	I0923 13:05:13.575524  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 87/120
	I0923 13:05:14.576968  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 88/120
	I0923 13:05:15.578555  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 89/120
	I0923 13:05:16.581170  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 90/120
	I0923 13:05:17.582588  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 91/120
	I0923 13:05:18.583979  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 92/120
	I0923 13:05:19.586261  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 93/120
	I0923 13:05:20.587549  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 94/120
	I0923 13:05:21.589338  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 95/120
	I0923 13:05:22.591154  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 96/120
	I0923 13:05:23.592691  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 97/120
	I0923 13:05:24.594339  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 98/120
	I0923 13:05:25.595843  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 99/120
	I0923 13:05:26.597622  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 100/120
	I0923 13:05:27.599141  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 101/120
	I0923 13:05:28.600626  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 102/120
	I0923 13:05:29.602135  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 103/120
	I0923 13:05:30.604057  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 104/120
	I0923 13:05:31.606545  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 105/120
	I0923 13:05:32.608329  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 106/120
	I0923 13:05:33.609872  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 107/120
	I0923 13:05:34.611518  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 108/120
	I0923 13:05:35.612973  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 109/120
	I0923 13:05:36.614767  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 110/120
	I0923 13:05:37.616481  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 111/120
	I0923 13:05:38.618025  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 112/120
	I0923 13:05:39.619676  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 113/120
	I0923 13:05:40.621111  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 114/120
	I0923 13:05:41.622755  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 115/120
	I0923 13:05:42.624510  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 116/120
	I0923 13:05:43.626259  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 117/120
	I0923 13:05:44.627983  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 118/120
	I0923 13:05:45.630300  687579 main.go:141] libmachine: (ha-097312-m03) Waiting for machine to stop 119/120
	I0923 13:05:46.630864  687579 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0923 13:05:46.630933  687579 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0923 13:05:46.633258  687579 out.go:201] 
	W0923 13:05:46.635246  687579 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0923 13:05:46.635269  687579 out.go:270] * 
	* 
	W0923 13:05:46.638529  687579 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 13:05:46.641570  687579 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-097312 -v=7 --alsologtostderr" : exit status 82
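Note: the stderr above shows minikube polling the m03 VM once per second, from "Waiting for machine to stop 0/120" through "119/120", and then giving up with GUEST_STOP_TIMEOUT (exit status 82), after which the subsequent `start --wait=true` had to recover the still-running node. As a rough illustration of that retry pattern only (not minikube's actual stop code; the vm interface, fakeVM type, and waitForStop helper below are hypothetical), a minimal Go sketch:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vm is a hypothetical stand-in for a libmachine driver handle.
	type vm interface {
		Stop() error            // request a guest shutdown
		State() (string, error) // e.g. "Running" or "Stopped"
	}

	// waitForStop requests a stop, then polls the VM state once per second,
	// mirroring the "Waiting for machine to stop i/120" lines in the log above.
	func waitForStop(m vm, maxAttempts int) error {
		if err := m.Stop(); err != nil {
			return err
		}
		for i := 0; i < maxAttempts; i++ {
			state, err := m.State()
			if err != nil {
				return err
			}
			if state != "Running" {
				return nil // stopped within the retry budget
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// fakeVM reports "Stopped" after a few polls so the example runs standalone.
	type fakeVM struct{ polls int }

	func (f *fakeVM) Stop() error { return nil }
	func (f *fakeVM) State() (string, error) {
		f.polls++
		if f.polls > 3 {
			return "Stopped", nil
		}
		return "Running", nil
	}

	func main() {
		if err := waitForStop(&fakeVM{}, 120); err != nil {
			fmt.Println("stop failed:", err)
		}
	}

In the failed run above, m03 never left the "Running" state within the 120 polls, which is why the stop command exited with GUEST_STOP_TIMEOUT instead of completing.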
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-097312 --wait=true -v=7 --alsologtostderr
E0923 13:05:56.883591  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:59.918786  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-097312 --wait=true -v=7 --alsologtostderr: (4m32.53884599s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-097312
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-097312 -n ha-097312
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-097312 logs -n 25: (2.165244018s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m02:/home/docker/cp-test_ha-097312-m03_ha-097312-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m02 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04:/home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m04 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp testdata/cp-test.txt                                                | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3809348295/001/cp-test_ha-097312-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312:/home/docker/cp-test_ha-097312-m04_ha-097312.txt                       |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312 sudo cat                                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312.txt                                 |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m02:/home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m02 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03:/home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m03 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-097312 node stop m02 -v=7                                                     | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-097312 node start m02 -v=7                                                    | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-097312 -v=7                                                           | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-097312 -v=7                                                                | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-097312 --wait=true -v=7                                                    | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:05 UTC | 23 Sep 24 13:10 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-097312                                                                | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:10 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:05:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:05:46.696980  688055 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:05:46.697121  688055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:05:46.697131  688055 out.go:358] Setting ErrFile to fd 2...
	I0923 13:05:46.697136  688055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:05:46.697351  688055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:05:46.697990  688055 out.go:352] Setting JSON to false
	I0923 13:05:46.699028  688055 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":10090,"bootTime":1727086657,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 13:05:46.699152  688055 start.go:139] virtualization: kvm guest
	I0923 13:05:46.701525  688055 out.go:177] * [ha-097312] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 13:05:46.703391  688055 notify.go:220] Checking for updates...
	I0923 13:05:46.703416  688055 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:05:46.704761  688055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:05:46.706280  688055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 13:05:46.707595  688055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 13:05:46.709085  688055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 13:05:46.710370  688055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:05:46.712056  688055 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:05:46.712200  688055 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:05:46.712676  688055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:05:46.712739  688055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:05:46.728805  688055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44823
	I0923 13:05:46.729375  688055 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:05:46.730008  688055 main.go:141] libmachine: Using API Version  1
	I0923 13:05:46.730042  688055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:05:46.730500  688055 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:05:46.730746  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:05:46.771029  688055 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 13:05:46.772668  688055 start.go:297] selected driver: kvm2
	I0923 13:05:46.772687  688055 start.go:901] validating driver "kvm2" against &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:05:46.772836  688055 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:05:46.773208  688055 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:05:46.773321  688055 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 13:05:46.789171  688055 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 13:05:46.790017  688055 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:05:46.790072  688055 cni.go:84] Creating CNI manager for ""
	I0923 13:05:46.790148  688055 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0923 13:05:46.790213  688055 start.go:340] cluster config:
	{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:05:46.790364  688055 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:05:46.792737  688055 out.go:177] * Starting "ha-097312" primary control-plane node in "ha-097312" cluster
	I0923 13:05:46.794515  688055 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:05:46.794580  688055 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 13:05:46.794590  688055 cache.go:56] Caching tarball of preloaded images
	I0923 13:05:46.794686  688055 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 13:05:46.794697  688055 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 13:05:46.794833  688055 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 13:05:46.795060  688055 start.go:360] acquireMachinesLock for ha-097312: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:05:46.795113  688055 start.go:364] duration metric: took 32.448µs to acquireMachinesLock for "ha-097312"
	I0923 13:05:46.795129  688055 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:05:46.795135  688055 fix.go:54] fixHost starting: 
	I0923 13:05:46.795414  688055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:05:46.795450  688055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:05:46.810871  688055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I0923 13:05:46.811360  688055 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:05:46.811862  688055 main.go:141] libmachine: Using API Version  1
	I0923 13:05:46.811886  688055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:05:46.812227  688055 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:05:46.812448  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:05:46.812616  688055 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 13:05:46.814211  688055 fix.go:112] recreateIfNeeded on ha-097312: state=Running err=<nil>
	W0923 13:05:46.814247  688055 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:05:46.816590  688055 out.go:177] * Updating the running kvm2 "ha-097312" VM ...
	I0923 13:05:46.818023  688055 machine.go:93] provisionDockerMachine start ...
	I0923 13:05:46.818053  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:05:46.818354  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:46.821479  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:46.822026  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:46.822077  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:46.822379  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:05:46.822574  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:46.822735  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:46.822880  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:05:46.823173  688055 main.go:141] libmachine: Using SSH client type: native
	I0923 13:05:46.823472  688055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 13:05:46.823488  688055 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:05:46.939540  688055 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312
	
	I0923 13:05:46.939574  688055 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 13:05:46.939836  688055 buildroot.go:166] provisioning hostname "ha-097312"
	I0923 13:05:46.939872  688055 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 13:05:46.940043  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:46.943429  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:46.943929  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:46.943968  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:46.944171  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:05:46.944386  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:46.944599  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:46.944731  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:05:46.944912  688055 main.go:141] libmachine: Using SSH client type: native
	I0923 13:05:46.945087  688055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 13:05:46.945102  688055 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-097312 && echo "ha-097312" | sudo tee /etc/hostname
	I0923 13:05:47.068753  688055 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312
	
	I0923 13:05:47.068784  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:47.071493  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.071935  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:47.071970  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.072123  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:05:47.072333  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:47.072531  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:47.072685  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:05:47.072841  688055 main.go:141] libmachine: Using SSH client type: native
	I0923 13:05:47.073034  688055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 13:05:47.073056  688055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-097312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-097312/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-097312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:05:47.186928  688055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:05:47.186966  688055 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 13:05:47.186992  688055 buildroot.go:174] setting up certificates
	I0923 13:05:47.187004  688055 provision.go:84] configureAuth start
	I0923 13:05:47.187015  688055 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 13:05:47.187278  688055 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 13:05:47.190282  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.190871  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:47.190901  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.191067  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:47.193413  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.193723  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:47.193744  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.193904  688055 provision.go:143] copyHostCerts
	I0923 13:05:47.193956  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 13:05:47.194007  688055 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 13:05:47.194028  688055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 13:05:47.194114  688055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 13:05:47.194247  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 13:05:47.194275  688055 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 13:05:47.194284  688055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 13:05:47.194324  688055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 13:05:47.194400  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 13:05:47.194435  688055 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 13:05:47.194444  688055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 13:05:47.194478  688055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 13:05:47.194546  688055 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.ha-097312 san=[127.0.0.1 192.168.39.160 ha-097312 localhost minikube]
	I0923 13:05:47.574760  688055 provision.go:177] copyRemoteCerts
	I0923 13:05:47.574841  688055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:05:47.574873  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:47.578017  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.578381  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:47.578422  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.578653  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:05:47.578895  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:47.579115  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:05:47.579254  688055 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 13:05:47.664339  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 13:05:47.664419  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 13:05:47.693346  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 13:05:47.693424  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0923 13:05:47.718325  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 13:05:47.718418  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:05:47.746634  688055 provision.go:87] duration metric: took 559.615125ms to configureAuth
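The copyRemoteCerts step above pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest over SSH. A minimal sketch of doing the same copy by hand, assuming the SSH key path and guest IP shown in this log:

	MK=/home/jenkins/minikube-integration/19690-662205/.minikube
	KEY=$MK/machines/ha-097312/id_rsa
	# stage the CA cert on the guest, then move it into place with root ownership
	scp -i "$KEY" "$MK/certs/ca.pem" docker@192.168.39.160:/tmp/ca.pem
	ssh -i "$KEY" docker@192.168.39.160 'sudo install -m 600 /tmp/ca.pem /etc/docker/ca.pem'
	# server.pem and server-key.pem under $MK/machines/ follow the same pattern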
	I0923 13:05:47.746668  688055 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:05:47.746936  688055 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:05:47.747044  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:47.750584  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.751120  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:47.751154  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.751355  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:05:47.751570  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:47.751747  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:47.751964  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:05:47.752155  688055 main.go:141] libmachine: Using SSH client type: native
	I0923 13:05:47.752372  688055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 13:05:47.752390  688055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:07:18.498816  688055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 13:07:18.498846  688055 machine.go:96] duration metric: took 1m31.680804191s to provisionDockerMachine
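The SSH command just above writes a CRIO_MINIKUBE_OPTIONS drop-in and restarts CRI-O; that tee + restart was issued at 13:05:47 and only returned at 13:07:18, which accounts for most of the 1m31s provisionDockerMachine duration. A quick way to confirm the drop-in and the restart on the guest (a sketch; standard systemd properties assumed available on the Buildroot image):

	# the drop-in minikube wrote
	cat /etc/sysconfig/crio.minikube          # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	# whether crio is up and when it last (re)started
	systemctl show crio -p ActiveState -p ExecMainStartTimestamp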
	I0923 13:07:18.498861  688055 start.go:293] postStartSetup for "ha-097312" (driver="kvm2")
	I0923 13:07:18.498877  688055 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:07:18.498901  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.499333  688055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:07:18.499366  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:07:18.502894  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.503364  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.503392  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.503605  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:07:18.503809  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.503960  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:07:18.504118  688055 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 13:07:18.589695  688055 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:07:18.594319  688055 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:07:18.594355  688055 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 13:07:18.594430  688055 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 13:07:18.594535  688055 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 13:07:18.594550  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 13:07:18.594645  688055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:07:18.604340  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:07:18.630044  688055 start.go:296] duration metric: took 131.165846ms for postStartSetup
	I0923 13:07:18.630098  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.630455  688055 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0923 13:07:18.630495  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:07:18.633680  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.634256  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.634297  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.634399  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:07:18.634680  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.634838  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:07:18.634961  688055 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	W0923 13:07:18.715817  688055 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0923 13:07:18.715843  688055 fix.go:56] duration metric: took 1m31.920709459s for fixHost
	I0923 13:07:18.715872  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:07:18.718515  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.718870  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.718908  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.719046  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:07:18.719268  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.719471  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.719615  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:07:18.719891  688055 main.go:141] libmachine: Using SSH client type: native
	I0923 13:07:18.720170  688055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 13:07:18.720196  688055 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:07:18.826507  688055 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727096838.789442525
	
	I0923 13:07:18.826533  688055 fix.go:216] guest clock: 1727096838.789442525
	I0923 13:07:18.826542  688055 fix.go:229] Guest: 2024-09-23 13:07:18.789442525 +0000 UTC Remote: 2024-09-23 13:07:18.715851736 +0000 UTC m=+92.061293391 (delta=73.590789ms)
	I0923 13:07:18.826595  688055 fix.go:200] guest clock delta is within tolerance: 73.590789ms
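The guest-clock check above is just a subtraction of the two timestamps it prints: 1727096838.789442525 (guest) minus 1727096838.715851736 (host) is approximately 0.0736 s, the 73.590789ms delta reported. The same comparison by hand (a sketch, reusing KEY and the guest address from the copyRemoteCerts sketch above):

	# read the guest clock over SSH and compare with the host clock
	GUEST=$(ssh -i "$KEY" docker@192.168.39.160 'date +%s.%N')
	HOST=$(date +%s.%N)
	awk -v g="$GUEST" -v h="$HOST" 'BEGIN { printf "delta: %.6f s\n", g - h }'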
	I0923 13:07:18.826603  688055 start.go:83] releasing machines lock for "ha-097312", held for 1m32.031479619s
	I0923 13:07:18.826629  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.826922  688055 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 13:07:18.829600  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.830006  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.830032  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.830242  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.830800  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.830973  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.831073  688055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:07:18.831139  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:07:18.831174  688055 ssh_runner.go:195] Run: cat /version.json
	I0923 13:07:18.831196  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:07:18.833936  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.834188  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.834466  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.834493  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.834662  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:07:18.834757  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.834784  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.834847  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.834929  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:07:18.834999  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:07:18.835055  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.835150  688055 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 13:07:18.835173  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:07:18.835316  688055 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 13:07:18.911409  688055 ssh_runner.go:195] Run: systemctl --version
	I0923 13:07:18.955390  688055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:07:19.123007  688055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 13:07:19.128775  688055 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:07:19.128857  688055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:07:19.137970  688055 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 13:07:19.137995  688055 start.go:495] detecting cgroup driver to use...
	I0923 13:07:19.138078  688055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:07:19.155197  688055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:07:19.169620  688055 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:07:19.169707  688055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:07:19.183861  688055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:07:19.198223  688055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:07:19.353120  688055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:07:19.496371  688055 docker.go:233] disabling docker service ...
	I0923 13:07:19.496454  688055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:07:19.512315  688055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:07:19.525784  688055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:07:19.674143  688055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:07:19.826027  688055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
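The block above makes sure only CRI-O owns the CRI socket: cri-docker and docker are stopped, their socket units disabled, and the services masked. The same sequence condensed into one sketch (the systemctl calls mirror the log, not minikube's exact code path):

	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	# confirm nothing docker-shaped is still active
	sudo systemctl is-active --quiet docker || echo "docker is not active"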
	I0923 13:07:19.841490  688055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:07:19.861706  688055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 13:07:19.861792  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.872636  688055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:07:19.872726  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.883461  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.894266  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.904936  688055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:07:19.915977  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.926513  688055 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.938462  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.948895  688055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:07:19.959121  688055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:07:19.969403  688055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:07:20.116581  688055 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 13:07:22.553549  688055 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.436893651s)
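The CRI-O preparation above boils down to pointing crictl at crio.sock and patching /etc/crio/crio.conf.d/02-crio.conf for the pause image and the cgroupfs driver before restarting the daemon. Collapsed into one sketch with the paths and keys taken from the log (the de-duplication of conmon_cgroup and the ip_unprivileged_port_start sysctl edits are omitted for brevity):

	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio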
	I0923 13:07:22.553606  688055 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:07:22.553659  688055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:07:22.558509  688055 start.go:563] Will wait 60s for crictl version
	I0923 13:07:22.558587  688055 ssh_runner.go:195] Run: which crictl
	I0923 13:07:22.562331  688055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:07:22.608688  688055 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 13:07:22.608780  688055 ssh_runner.go:195] Run: crio --version
	I0923 13:07:22.636010  688055 ssh_runner.go:195] Run: crio --version
	I0923 13:07:22.666425  688055 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 13:07:22.668395  688055 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 13:07:22.671648  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:22.672113  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:22.672135  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:22.672454  688055 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 13:07:22.677484  688055 kubeadm.go:883] updating cluster {Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:07:22.677664  688055 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:07:22.677710  688055 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:07:22.721704  688055 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:07:22.721740  688055 crio.go:433] Images already preloaded, skipping extraction
	I0923 13:07:22.721809  688055 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:07:22.756651  688055 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:07:22.756689  688055 cache_images.go:84] Images are preloaded, skipping loading
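"crictl images --output json" is how minikube decides the preload already contains every image it needs. To eyeball the same thing on the node (sketch):

	sudo crictl images              # human-readable table of the preloaded images
	sudo crictl images -q | wc -l   # quick count of image IDs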
	I0923 13:07:22.756705  688055 kubeadm.go:934] updating node { 192.168.39.160 8443 v1.31.1 crio true true} ...
	I0923 13:07:22.756846  688055 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-097312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:07:22.756921  688055 ssh_runner.go:195] Run: crio config
	I0923 13:07:22.805465  688055 cni.go:84] Creating CNI manager for ""
	I0923 13:07:22.805500  688055 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0923 13:07:22.805516  688055 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:07:22.805541  688055 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-097312 NodeName:ha-097312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:07:22.805687  688055 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-097312"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
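The generated kubeadm config above is later copied to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). A hedged sketch for sanity-checking such a file before kubeadm consumes it, assuming the v1.31 kubeadm binary under /var/lib/minikube/binaries supports the "config validate" subcommand:

	KUBEADM=/var/lib/minikube/binaries/v1.31.1/kubeadm
	sudo "$KUBEADM" config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# or render the defaults kubeadm would otherwise use, for comparison
	sudo "$KUBEADM" config print init-defaults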
	
	I0923 13:07:22.805710  688055 kube-vip.go:115] generating kube-vip config ...
	I0923 13:07:22.805752  688055 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 13:07:22.817117  688055 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 13:07:22.817278  688055 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
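The kube-vip static pod above is what serves the control-plane VIP 192.168.39.254 on port 8443. Once kubelet launches it, a quick check that the VIP is bound and answering (a sketch; /livez is assumed to be reachable anonymously, as it is on a default-configured apiserver):

	# run on the control-plane node that currently holds the plndr-cp-lock lease
	ip -4 addr show dev eth0 | grep 192.168.39.254      # VIP bound by kube-vip
	curl -ks https://192.168.39.254:8443/livez; echo    # expect "ok"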
	I0923 13:07:22.817357  688055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:07:22.827136  688055 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:07:22.827221  688055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 13:07:22.836641  688055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 13:07:22.853973  688055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:07:22.870952  688055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 13:07:22.887848  688055 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 13:07:22.905058  688055 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 13:07:22.910298  688055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:07:23.054329  688055 ssh_runner.go:195] Run: sudo systemctl start kubelet
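At this point the kubelet drop-in (10-kubeadm.conf), the unit file and kube-vip.yaml are on disk and kubelet has been started. If it does not come up cleanly, the usual places to look (sketch):

	systemctl status kubelet --no-pager
	journalctl -u kubelet --no-pager -n 50
	ls /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/kubernetes/manifests/kube-vip.yaml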
	I0923 13:07:23.069291  688055 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312 for IP: 192.168.39.160
	I0923 13:07:23.069327  688055 certs.go:194] generating shared ca certs ...
	I0923 13:07:23.069347  688055 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:07:23.069577  688055 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 13:07:23.069635  688055 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 13:07:23.069647  688055 certs.go:256] generating profile certs ...
	I0923 13:07:23.069805  688055 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key
	I0923 13:07:23.069864  688055 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.f2bacd8f
	I0923 13:07:23.069884  688055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.f2bacd8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160 192.168.39.214 192.168.39.174 192.168.39.254]
	I0923 13:07:23.560111  688055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.f2bacd8f ...
	I0923 13:07:23.560148  688055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.f2bacd8f: {Name:mkba1a7ff7fcdf029a4874e87d6a34c95699d0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:07:23.560336  688055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.f2bacd8f ...
	I0923 13:07:23.560349  688055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.f2bacd8f: {Name:mkd072123cc33301ff212141ab17814b18bb44e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:07:23.560415  688055 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.f2bacd8f -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	I0923 13:07:23.560606  688055 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.f2bacd8f -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key
	I0923 13:07:23.560757  688055 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key
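The apiserver profile cert generated above has to carry every control-plane address as a SAN, which is why its IP list includes all three control-plane node IPs plus the 192.168.39.254 VIP. One way to confirm the SANs on the written cert (sketch, using the local path from the log):

	CRT=/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	openssl x509 -in "$CRT" -noout -text | grep -A1 'Subject Alternative Name'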
	I0923 13:07:23.560774  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:07:23.560787  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:07:23.560798  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:07:23.560819  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:07:23.560831  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 13:07:23.560841  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 13:07:23.560854  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 13:07:23.560869  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 13:07:23.560921  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 13:07:23.560948  688055 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 13:07:23.560957  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 13:07:23.560982  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 13:07:23.561003  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:07:23.561023  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 13:07:23.561059  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:07:23.561085  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 13:07:23.561102  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 13:07:23.561120  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:07:23.561741  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:07:23.600445  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:07:23.639005  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:07:23.672909  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:07:23.698256  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0923 13:07:23.722508  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 13:07:23.749014  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:07:23.774998  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 13:07:23.799683  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 13:07:23.823709  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 13:07:23.847584  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:07:23.870761  688055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:07:23.888134  688055 ssh_runner.go:195] Run: openssl version
	I0923 13:07:23.893955  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 13:07:23.904614  688055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 13:07:23.908934  688055 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 13:07:23.908998  688055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 13:07:23.914676  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 13:07:23.925108  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:07:23.936456  688055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:07:23.941455  688055 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:07:23.941528  688055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:07:23.947368  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:07:23.957758  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 13:07:23.969258  688055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 13:07:23.974108  688055 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 13:07:23.974202  688055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 13:07:23.980124  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
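Each CA bundle above is installed the same way: copy it under /usr/share/ca-certificates, then symlink /etc/ssl/certs/<subject-hash>.0 to it, where the hash is whatever "openssl x509 -hash" prints. The generic pattern (sketch):

	PEM=/usr/share/ca-certificates/6694472.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"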
	I0923 13:07:23.990259  688055 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:07:23.994977  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 13:07:24.001065  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 13:07:24.007047  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 13:07:24.012797  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 13:07:24.018767  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 13:07:24.024550  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
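The -checkend 86400 runs above simply ask openssl whether each control-plane cert is still valid for at least another 24 hours (exit status 0 if so). The same set checked in a loop (sketch, paths from the log):

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	    && echo "$c: ok" || echo "$c: expires within 24h"
	done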
	I0923 13:07:24.030569  688055 kubeadm.go:392] StartCluster: {Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:07:24.030702  688055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 13:07:24.030752  688055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:07:24.069487  688055 cri.go:89] found id: "7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f"
	I0923 13:07:24.069519  688055 cri.go:89] found id: "734f0cb5eab507be54ea52dbb406b37e87e0dbd8c959f3c135081aae7fc73520"
	I0923 13:07:24.069524  688055 cri.go:89] found id: "65ce8ee9790645dba54c36e9ba009961df64527fd59e20c866265404b97342ad"
	I0923 13:07:24.069528  688055 cri.go:89] found id: "7322669ed5e0c54ea12545610d9e118abd4651267c1bcf8718d21a45f2a03f5e"
	I0923 13:07:24.069531  688055 cri.go:89] found id: "6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828"
	I0923 13:07:24.069535  688055 cri.go:89] found id: "070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05"
	I0923 13:07:24.069538  688055 cri.go:89] found id: "cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab"
	I0923 13:07:24.069541  688055 cri.go:89] found id: "03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c"
	I0923 13:07:24.069544  688055 cri.go:89] found id: "37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce"
	I0923 13:07:24.069552  688055 cri.go:89] found id: "e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb"
	I0923 13:07:24.069572  688055 cri.go:89] found id: "9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad"
	I0923 13:07:24.069575  688055 cri.go:89] found id: "5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431"
	I0923 13:07:24.069578  688055 cri.go:89] found id: "1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333"
	I0923 13:07:24.069580  688055 cri.go:89] found id: "476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492"
	I0923 13:07:24.069586  688055 cri.go:89] found id: ""
	I0923 13:07:24.069646  688055 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.177267961Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:91947e1a82d06b511d7c18ef8debffb602cc4a5086f7adf39c515c6c7780dfe4,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-4rksx,Uid:378f72ef-8447-411d-a70b-bb355788eff4,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727096883276087797,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T12:59:43.407114849Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:62c7b6d1bfb3fcf355ef1262dbf0a6981964acecb49c05105bf7368bae4ee0f2,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-097312,Uid:439831b6eefde7ddc923373d885892d5,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1727096862008250687,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439831b6eefde7ddc923373d885892d5,},Annotations:map[string]string{kubernetes.io/config.hash: 439831b6eefde7ddc923373d885892d5,kubernetes.io/config.seen: 2024-09-23T13:07:22.869119624Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:757baf6c9026cc6a6c35376a447df19610b1c547d345e66b3943e826e53d744b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-txcxz,Uid:e6da5f25-f232-4649-9801-f3577210ea2e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727096849586869808,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-23T12:57:19.874069764Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0bbda806-091c-4e48-982a-296bbf03abd6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727096849584464361,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":
\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-23T12:57:19.876497413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-097312,Uid:b7606555cae3af30f14e539fb18c319e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727096849568721438,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-ap
iserver.advertise-address.endpoint: 192.168.39.160:8443,kubernetes.io/config.hash: b7606555cae3af30f14e539fb18c319e,kubernetes.io/config.seen: 2024-09-23T12:57:02.061858876Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:694418cca7eb400f2a4cb270d0a9f891885c67d0bff9eeba86619473c970f3d6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-097312,Uid:e691f4013a742318fc23cd46bae362e8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727096849565773791,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e691f4013a742318fc23cd46bae362e8,kubernetes.io/config.seen: 2024-09-23T12:57:02.061860853Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd9ae27e8638bebbdcb54e62019125f61b83446a30ad75f1e242f56744544025,M
etadata:&PodSandboxMetadata{Name:kube-proxy-drj8m,Uid:a1c5535e-7139-441f-9065-ef7d147582d2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727096849564467871,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T12:57:06.860001224Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-097312,Uid:415ae4cebae57e6b1ebe046e97e7cb98,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727096849562522203,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-097312,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 415ae4cebae57e6b1ebe046e97e7cb98,kubernetes.io/config.seen: 2024-09-23T12:57:02.061859905Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c1b4e30bea3c71aa0ef8865f692497339ab86e89f3a99aa3e70cd62bf3002a45,Metadata:&PodSandboxMetadata{Name:etcd-ha-097312,Uid:f7a4f10af129576cf98e9295b3acebd8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727096849558871750,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.160:2379,kubernetes.io/config.hash: f7a4f10af129576cf98e9295b3acebd8,kubernetes.io/config.seen: 2024-09-23T12:57:02.06185
4560Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:190c2732e8a28ab3ed97d3ff86abb432daa7bfa8bfbeed10c4fbb8fea19647cb,Metadata:&PodSandboxMetadata{Name:kindnet-j8l5t,Uid:49216705-6e85-4b98-afbd-f4228b774321,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727096849548591679,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T12:57:06.850891940Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:10a49ceedb6d63dfd25b55150d5d26085608a48663bd0221079001a1cea652a0,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6g9x2,Uid:af485e47-0e78-483e-8f35-a7a4ab53f014,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727096843385772943,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T12:57:19.864542176Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-4rksx,Uid:378f72ef-8447-411d-a70b-bb355788eff4,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727096383717992361,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T12:59:43.407114849Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-txcxz,Uid:e6da5f25-f232-4649-9801-f3577210ea2e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727096240196414444,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T12:57:19.874069764Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6g9x2,Uid:af485e47-0e78-483e-8f35-a7a4ab53f014,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727096240174094139,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T12:57:19.864542176Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&PodSandboxMetadata{Name:kube-proxy-drj8m,Uid:a1c5535e-7139-441f-9065-ef7d147582d2,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727096228073441361,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T12:57:06.860001224Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&PodSandboxMetadata{Name:kindnet-j8l5t,Uid:49216705-6e85-4b98-afbd-f4228b774321,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727096228059154715,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T12:57:06.850891940Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&PodSandboxMetadata{Name:etcd-ha-097312,Uid:f7a4f10af129576cf98e9295b3acebd8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727096215369158829,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.160:2379,kubernetes.io/config.hash: f7a4f10af129576cf98e9295b3acebd8,kubernetes.io/config.seen: 2024-09-23T12:56:54.898254013Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-097312,Uid:e691f4013a742318fc23cd46bae362e8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727096215357532129,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e691f401
3a742318fc23cd46bae362e8,kubernetes.io/config.seen: 2024-09-23T12:56:54.898252241Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=13fd71e5-b8ec-458f-922f-ead4f9de7731 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.178533639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fab43e94-958a-4237-b937-b71c7531f957 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.179385019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fab43e94-958a-4237-b937-b71c7531f957 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.179942952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d27e0a2d3851698bf14a74ab80a5aa5c92e2b29d0e3e5daf878fedaa77a028b,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096926161834491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327bfbcf6b79a16f1a5d0c94377815fc5fbb5bc82e544e68403e7ec0e90448e8,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096897149733438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f3d21af63b5abeda02040f268e6cc8e42b9f5c0d833e8e462586290e7f1d4c6,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096885162755257,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ef888f8ec955bef6dc9006e9b04cd7f0c520501780bf227a51838b9b055d5,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727096884150521741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5776f6ad95118b2c81dead9b92b71a822195a4bef5adbf5871dcef1697e6d5a6,PodSandboxId:91947e1a82d06b511d7c18ef8debffb602cc4a5086f7adf39c515c6c7780dfe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096883437498706,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31662037f5b073826dbc31fa11734016648662a603866125155e58446d4c73fe,PodSandboxId:62c7b6d1bfb3fcf355ef1262dbf0a6981964acecb49c05105bf7368bae4ee0f2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096862113200841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439831b6eefde7ddc923373d885892d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6527aca4afc7a32189ae29950d639e34564886f591210b00866727f72ecf2617,PodSandboxId:dd9ae27e8638bebbdcb54e62019125f61b83446a30ad75f1e242f56744544025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096850427518493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:fddebc96422aa20750bd4deb2fa7a71b609a0b73820282e5572365906bad733d,PodSandboxId:190c2732e8a28ab3ed97d3ff86abb432daa7bfa8bfbeed10c4fbb8fea19647cb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727096850135206176,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc4e39b
b9f3f77cad4a321dfd137c68026104431291fca9781d2bc69c01eda2,PodSandboxId:757baf6c9026cc6a6c35376a447df19610b1c547d345e66b3943e826e53d744b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096850108299496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0267858752e0468b892f98165ea7b1e17a2afde6ca05faccacf5ab35984ae965,PodSandboxId:694418cca7eb400f2a4cb270d0a9f891885c67d0bff9eeba86619473c970f3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096850189041185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c4427bd859a561476ab926494c2f6a0c2babe6ae47fd7b07d495a1ccb47adbb,PodSandboxId:c1b4e30bea3c71aa0ef8865f692497339ab86e89f3a99aa3e70cd62bf3002a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096850027124073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063ee6e5cb4852bcf99e81658a40b6a882427bed47d9cff993d0a1d51f047fab,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727096849904481582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb38d637855f4ef3d09f67d9d173fa2f585dfbf0fd48555c4d80a36d7a8096,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727096849802494969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f,PodSandboxId:10a49ceedb6d63dfd25b55150d5d26085608a48663bd0221079001a1cea652a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096843509165688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727096387328890914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240450572434,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240372228978,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727096228373784218,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727096228199442884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727096215629506566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727096215612604563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fab43e94-958a-4237-b937-b71c7531f957 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.233981739Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76a90193-c631-4bd0-a58c-14c2ebfd3347 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.234080019Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76a90193-c631-4bd0-a58c-14c2ebfd3347 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.236549147Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f490df2-168f-45c2-9ac3-7cdef22525b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.237343345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097020237302346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f490df2-168f-45c2-9ac3-7cdef22525b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.238415678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=befa76de-90b2-4f79-9876-d7d7e48ffc44 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.238690347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=befa76de-90b2-4f79-9876-d7d7e48ffc44 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.239275566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d27e0a2d3851698bf14a74ab80a5aa5c92e2b29d0e3e5daf878fedaa77a028b,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096926161834491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327bfbcf6b79a16f1a5d0c94377815fc5fbb5bc82e544e68403e7ec0e90448e8,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096897149733438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f3d21af63b5abeda02040f268e6cc8e42b9f5c0d833e8e462586290e7f1d4c6,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096885162755257,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ef888f8ec955bef6dc9006e9b04cd7f0c520501780bf227a51838b9b055d5,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727096884150521741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5776f6ad95118b2c81dead9b92b71a822195a4bef5adbf5871dcef1697e6d5a6,PodSandboxId:91947e1a82d06b511d7c18ef8debffb602cc4a5086f7adf39c515c6c7780dfe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096883437498706,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31662037f5b073826dbc31fa11734016648662a603866125155e58446d4c73fe,PodSandboxId:62c7b6d1bfb3fcf355ef1262dbf0a6981964acecb49c05105bf7368bae4ee0f2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096862113200841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439831b6eefde7ddc923373d885892d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6527aca4afc7a32189ae29950d639e34564886f591210b00866727f72ecf2617,PodSandboxId:dd9ae27e8638bebbdcb54e62019125f61b83446a30ad75f1e242f56744544025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096850427518493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:fddebc96422aa20750bd4deb2fa7a71b609a0b73820282e5572365906bad733d,PodSandboxId:190c2732e8a28ab3ed97d3ff86abb432daa7bfa8bfbeed10c4fbb8fea19647cb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727096850135206176,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc4e39b
b9f3f77cad4a321dfd137c68026104431291fca9781d2bc69c01eda2,PodSandboxId:757baf6c9026cc6a6c35376a447df19610b1c547d345e66b3943e826e53d744b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096850108299496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0267858752e0468b892f98165ea7b1e17a2afde6ca05faccacf5ab35984ae965,PodSandboxId:694418cca7eb400f2a4cb270d0a9f891885c67d0bff9eeba86619473c970f3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096850189041185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c4427bd859a561476ab926494c2f6a0c2babe6ae47fd7b07d495a1ccb47adbb,PodSandboxId:c1b4e30bea3c71aa0ef8865f692497339ab86e89f3a99aa3e70cd62bf3002a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096850027124073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063ee6e5cb4852bcf99e81658a40b6a882427bed47d9cff993d0a1d51f047fab,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727096849904481582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb38d637855f4ef3d09f67d9d173fa2f585dfbf0fd48555c4d80a36d7a8096,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727096849802494969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f,PodSandboxId:10a49ceedb6d63dfd25b55150d5d26085608a48663bd0221079001a1cea652a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096843509165688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727096387328890914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240450572434,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240372228978,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727096228373784218,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727096228199442884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727096215629506566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727096215612604563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=befa76de-90b2-4f79-9876-d7d7e48ffc44 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.303890672Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf569bda-a0ae-4458-a559-59f50c0a7142 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.304011793Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf569bda-a0ae-4458-a559-59f50c0a7142 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.305326048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d181e0d3-39d9-4c04-9550-e4c6bb77faad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.306149583Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097020306113216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d181e0d3-39d9-4c04-9550-e4c6bb77faad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.306952425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cde445a4-dcad-4a0d-bb2e-4b52069cf549 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.307051264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cde445a4-dcad-4a0d-bb2e-4b52069cf549 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.307734491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d27e0a2d3851698bf14a74ab80a5aa5c92e2b29d0e3e5daf878fedaa77a028b,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096926161834491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327bfbcf6b79a16f1a5d0c94377815fc5fbb5bc82e544e68403e7ec0e90448e8,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096897149733438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f3d21af63b5abeda02040f268e6cc8e42b9f5c0d833e8e462586290e7f1d4c6,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096885162755257,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ef888f8ec955bef6dc9006e9b04cd7f0c520501780bf227a51838b9b055d5,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727096884150521741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5776f6ad95118b2c81dead9b92b71a822195a4bef5adbf5871dcef1697e6d5a6,PodSandboxId:91947e1a82d06b511d7c18ef8debffb602cc4a5086f7adf39c515c6c7780dfe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096883437498706,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31662037f5b073826dbc31fa11734016648662a603866125155e58446d4c73fe,PodSandboxId:62c7b6d1bfb3fcf355ef1262dbf0a6981964acecb49c05105bf7368bae4ee0f2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096862113200841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439831b6eefde7ddc923373d885892d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6527aca4afc7a32189ae29950d639e34564886f591210b00866727f72ecf2617,PodSandboxId:dd9ae27e8638bebbdcb54e62019125f61b83446a30ad75f1e242f56744544025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096850427518493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:fddebc96422aa20750bd4deb2fa7a71b609a0b73820282e5572365906bad733d,PodSandboxId:190c2732e8a28ab3ed97d3ff86abb432daa7bfa8bfbeed10c4fbb8fea19647cb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727096850135206176,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc4e39b
b9f3f77cad4a321dfd137c68026104431291fca9781d2bc69c01eda2,PodSandboxId:757baf6c9026cc6a6c35376a447df19610b1c547d345e66b3943e826e53d744b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096850108299496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0267858752e0468b892f98165ea7b1e17a2afde6ca05faccacf5ab35984ae965,PodSandboxId:694418cca7eb400f2a4cb270d0a9f891885c67d0bff9eeba86619473c970f3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096850189041185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c4427bd859a561476ab926494c2f6a0c2babe6ae47fd7b07d495a1ccb47adbb,PodSandboxId:c1b4e30bea3c71aa0ef8865f692497339ab86e89f3a99aa3e70cd62bf3002a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096850027124073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063ee6e5cb4852bcf99e81658a40b6a882427bed47d9cff993d0a1d51f047fab,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727096849904481582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb38d637855f4ef3d09f67d9d173fa2f585dfbf0fd48555c4d80a36d7a8096,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727096849802494969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f,PodSandboxId:10a49ceedb6d63dfd25b55150d5d26085608a48663bd0221079001a1cea652a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096843509165688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727096387328890914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240450572434,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240372228978,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727096228373784218,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727096228199442884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727096215629506566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727096215612604563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cde445a4-dcad-4a0d-bb2e-4b52069cf549 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.361586041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4236f41-74c1-4993-8695-1403340d5acb name=/runtime.v1.RuntimeService/Version
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.361728341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4236f41-74c1-4993-8695-1403340d5acb name=/runtime.v1.RuntimeService/Version
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.363395687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0815580d-b79b-4a81-bbe0-62f74e42031a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.364245520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097020364209519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0815580d-b79b-4a81-bbe0-62f74e42031a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.365006564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9976b23b-1f4c-4996-8ccc-204fecfe09ec name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.365109568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9976b23b-1f4c-4996-8ccc-204fecfe09ec name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:10:20 ha-097312 crio[3615]: time="2024-09-23 13:10:20.365773774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d27e0a2d3851698bf14a74ab80a5aa5c92e2b29d0e3e5daf878fedaa77a028b,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096926161834491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327bfbcf6b79a16f1a5d0c94377815fc5fbb5bc82e544e68403e7ec0e90448e8,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096897149733438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f3d21af63b5abeda02040f268e6cc8e42b9f5c0d833e8e462586290e7f1d4c6,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096885162755257,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ef888f8ec955bef6dc9006e9b04cd7f0c520501780bf227a51838b9b055d5,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727096884150521741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5776f6ad95118b2c81dead9b92b71a822195a4bef5adbf5871dcef1697e6d5a6,PodSandboxId:91947e1a82d06b511d7c18ef8debffb602cc4a5086f7adf39c515c6c7780dfe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096883437498706,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31662037f5b073826dbc31fa11734016648662a603866125155e58446d4c73fe,PodSandboxId:62c7b6d1bfb3fcf355ef1262dbf0a6981964acecb49c05105bf7368bae4ee0f2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096862113200841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439831b6eefde7ddc923373d885892d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6527aca4afc7a32189ae29950d639e34564886f591210b00866727f72ecf2617,PodSandboxId:dd9ae27e8638bebbdcb54e62019125f61b83446a30ad75f1e242f56744544025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096850427518493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:fddebc96422aa20750bd4deb2fa7a71b609a0b73820282e5572365906bad733d,PodSandboxId:190c2732e8a28ab3ed97d3ff86abb432daa7bfa8bfbeed10c4fbb8fea19647cb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727096850135206176,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc4e39b
b9f3f77cad4a321dfd137c68026104431291fca9781d2bc69c01eda2,PodSandboxId:757baf6c9026cc6a6c35376a447df19610b1c547d345e66b3943e826e53d744b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096850108299496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0267858752e0468b892f98165ea7b1e17a2afde6ca05faccacf5ab35984ae965,PodSandboxId:694418cca7eb400f2a4cb270d0a9f891885c67d0bff9eeba86619473c970f3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096850189041185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c4427bd859a561476ab926494c2f6a0c2babe6ae47fd7b07d495a1ccb47adbb,PodSandboxId:c1b4e30bea3c71aa0ef8865f692497339ab86e89f3a99aa3e70cd62bf3002a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096850027124073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063ee6e5cb4852bcf99e81658a40b6a882427bed47d9cff993d0a1d51f047fab,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727096849904481582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb38d637855f4ef3d09f67d9d173fa2f585dfbf0fd48555c4d80a36d7a8096,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727096849802494969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f,PodSandboxId:10a49ceedb6d63dfd25b55150d5d26085608a48663bd0221079001a1cea652a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096843509165688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727096387328890914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240450572434,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240372228978,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727096228373784218,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727096228199442884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727096215629506566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727096215612604563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9976b23b-1f4c-4996-8ccc-204fecfe09ec name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7d27e0a2d3851       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   1208cacfef830       storage-provisioner
	327bfbcf6b79a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Running             kube-controller-manager   2                   bcf3a61d4cd3d       kube-controller-manager-ha-097312
	1f3d21af63b5a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Running             kube-apiserver            3                   5647b2efd9d71       kube-apiserver-ha-097312
	e91ef888f8ec9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   1208cacfef830       storage-provisioner
	5776f6ad95118       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   91947e1a82d06       busybox-7dff88458-4rksx
	31662037f5b07       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   62c7b6d1bfb3f       kube-vip-ha-097312
	6527aca4afc7a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   dd9ae27e8638b       kube-proxy-drj8m
	0267858752e04       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   694418cca7eb4       kube-scheduler-ha-097312
	fddebc96422aa       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   190c2732e8a28       kindnet-j8l5t
	bcc4e39bb9f3f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   757baf6c9026c       coredns-7c65d6cfc9-txcxz
	1c4427bd859a5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   c1b4e30bea3c7       etcd-ha-097312
	063ee6e5cb485       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   bcf3a61d4cd3d       kube-controller-manager-ha-097312
	f3bb38d637855       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   5647b2efd9d71       kube-apiserver-ha-097312
	7524fbdf92495       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   10a49ceedb6d6       coredns-7c65d6cfc9-6g9x2
	0c8b3d3e1c960       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   01a99cef826dd       busybox-7dff88458-4rksx
	6494b72ca963e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   09f40d2b50613       coredns-7c65d6cfc9-txcxz
	cead05960724e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   d6346e81a93e3       coredns-7c65d6cfc9-6g9x2
	03670fd92c8a8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   fa074de98ab0b       kindnet-j8l5t
	37b6ad938698e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   8efd7c52e41eb       kube-proxy-drj8m
	9bfbdbe2c35f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   46a49b5018b58       etcd-ha-097312
	5c9e8fb5e944b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   e4cdc1cb583f4       kube-scheduler-ha-097312
	
	
	==> coredns [6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828] <==
	[INFO] 10.244.0.4:56395 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159124s
	[INFO] 10.244.2.2:48128 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168767s
	[INFO] 10.244.2.2:38686 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001366329s
	[INFO] 10.244.2.2:54280 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098386s
	[INFO] 10.244.2.2:36178 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083893s
	[INFO] 10.244.1.2:36479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151724s
	[INFO] 10.244.1.2:52581 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000183399s
	[INFO] 10.244.1.2:36358 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015472s
	[INFO] 10.244.0.4:37418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198313s
	[INFO] 10.244.2.2:52660 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011216s
	[INFO] 10.244.1.2:33460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123493s
	[INFO] 10.244.1.2:42619 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000187646s
	[INFO] 10.244.0.4:50282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110854s
	[INFO] 10.244.0.4:48865 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169177s
	[INFO] 10.244.0.4:52671 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110814s
	[INFO] 10.244.2.2:49013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236486s
	[INFO] 10.244.2.2:37600 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000236051s
	[INFO] 10.244.2.2:54687 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137539s
	[INFO] 10.244.1.2:37754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237319s
	[INFO] 10.244.1.2:50571 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167449s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1747&timeout=6m52s&timeoutSeconds=412&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	
	
	==> coredns [7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1105559606]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:07:34.664) (total time: 10000ms):
	Trace[1105559606]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:07:44.664)
	Trace[1105559606]: [10.00085903s] [10.00085903s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[342942772]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:07:39.227) (total time: 10001ms):
	Trace[342942772]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:07:49.229)
	Trace[342942772]: [10.001532009s] [10.001532009s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [bcc4e39bb9f3f77cad4a321dfd137c68026104431291fca9781d2bc69c01eda2] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44944->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44944->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44928->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44928->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:44926->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1508766833]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:07:43.940) (total time: 10664ms):
	Trace[1508766833]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:44926->10.96.0.1:443: read: connection reset by peer 10663ms (13:07:54.604)
	Trace[1508766833]: [10.664217655s] [10.664217655s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:44926->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab] <==
	[INFO] 10.244.2.2:57929 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002002291s
	[INFO] 10.244.2.2:39920 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241567s
	[INFO] 10.244.2.2:40496 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084082s
	[INFO] 10.244.1.2:53956 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001953841s
	[INFO] 10.244.1.2:39693 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161735s
	[INFO] 10.244.1.2:59255 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001392042s
	[INFO] 10.244.1.2:33162 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137674s
	[INFO] 10.244.1.2:56819 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135224s
	[INFO] 10.244.0.4:58065 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142108s
	[INFO] 10.244.0.4:49950 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114547s
	[INFO] 10.244.0.4:48467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051186s
	[INFO] 10.244.2.2:57485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120774s
	[INFO] 10.244.2.2:47368 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105596s
	[INFO] 10.244.2.2:52953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077623s
	[INFO] 10.244.1.2:45470 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011128s
	[INFO] 10.244.1.2:35601 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157053s
	[INFO] 10.244.0.4:60925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000610878s
	[INFO] 10.244.2.2:48335 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176802s
	[INFO] 10.244.1.2:39758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190843s
	[INFO] 10.244.1.2:35713 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110523s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1747&timeout=6m4s&timeoutSeconds=364&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1798&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1740&timeout=9m58s&timeoutSeconds=598&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-097312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_57_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:57:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:10:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:08:09 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:08:09 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:08:09 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:08:09 +0000   Mon, 23 Sep 2024 12:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    ha-097312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fef43eb48e8a42b5815ed7c921d42333
	  System UUID:                fef43eb4-8e8a-42b5-815e-d7c921d42333
	  Boot ID:                    22749ef5-5a8a-4d9f-b42e-96dd2d4e32eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4rksx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-6g9x2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-txcxz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-097312                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-j8l5t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-097312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-097312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-drj8m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-097312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-097312                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m7s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-097312 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-097312 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-097312 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-097312 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Warning  ContainerGCFailed        3m18s (x2 over 4m18s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m5s (x3 over 3m54s)   kubelet          Node ha-097312 status is now: NodeNotReady
	  Normal   RegisteredNode           2m9s                   node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal   RegisteredNode           2m1s                   node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal   RegisteredNode           36s                    node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	
	
	Name:               ha-097312-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_57_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:57:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:10:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:08:55 +0000   Mon, 23 Sep 2024 13:08:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:08:55 +0000   Mon, 23 Sep 2024 13:08:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:08:55 +0000   Mon, 23 Sep 2024 13:08:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:08:55 +0000   Mon, 23 Sep 2024 13:08:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    ha-097312-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 226ea4f6db5b44f7bdab73033cb7ae33
	  System UUID:                226ea4f6-db5b-44f7-bdab-73033cb7ae33
	  Boot ID:                    d82097e7-308e-44f7-a550-0d3292edbeaf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wz97n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-097312-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-hcclj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-097312-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-097312-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-z6ss5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-097312-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-097312-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-097312-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-097312-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-097312-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  NodeNotReady             8m37s                  node-controller  Node ha-097312-m02 status is now: NodeNotReady
	  Normal  Starting                 2m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node ha-097312-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node ha-097312-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m34s (x7 over 2m34s)  kubelet          Node ha-097312-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           2m1s                   node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           36s                    node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	
	
	Name:               ha-097312-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_59_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:59:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:10:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:09:55 +0000   Mon, 23 Sep 2024 13:09:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:09:55 +0000   Mon, 23 Sep 2024 13:09:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:09:55 +0000   Mon, 23 Sep 2024 13:09:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:09:55 +0000   Mon, 23 Sep 2024 13:09:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    ha-097312-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 21b2a00385684360824371ae7a980598
	  System UUID:                21b2a003-8568-4360-8243-71ae7a980598
	  Boot ID:                    da5b83df-d090-405b-b569-ce4691aad1d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tx8b9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-097312-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-lcrdg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-097312-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-097312-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-vs524                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-097312-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-097312-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-097312-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-097312-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-097312-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	  Normal   RegisteredNode           2m9s               node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	  Normal   RegisteredNode           2m1s               node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	  Normal   NodeNotReady             89s                node-controller  Node ha-097312-m03 status is now: NodeNotReady
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 56s                kubelet          Node ha-097312-m03 has been rebooted, boot id: da5b83df-d090-405b-b569-ce4691aad1d4
	  Normal   NodeHasSufficientMemory  56s (x2 over 56s)  kubelet          Node ha-097312-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x2 over 56s)  kubelet          Node ha-097312-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x2 over 56s)  kubelet          Node ha-097312-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                56s                kubelet          Node ha-097312-m03 status is now: NodeReady
	  Normal   RegisteredNode           36s                node-controller  Node ha-097312-m03 event: Registered Node ha-097312-m03 in Controller
	
	
	Name:               ha-097312-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_00_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:00:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:10:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:10:12 +0000   Mon, 23 Sep 2024 13:10:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:10:12 +0000   Mon, 23 Sep 2024 13:10:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:10:12 +0000   Mon, 23 Sep 2024 13:10:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:10:12 +0000   Mon, 23 Sep 2024 13:10:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    ha-097312-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 23903b49596849ed8163495c455231a4
	  System UUID:                23903b49-5968-49ed-8163-495c455231a4
	  Boot ID:                    08d8ee6f-9bbf-458d-8f61-8151e6dbaa95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pzs94       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m56s
	  kube-system                 kube-proxy-7hlnw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m50s                  kube-proxy       
	  Normal   RegisteredNode           9m56s                  node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   RegisteredNode           9m56s                  node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   NodeHasSufficientMemory  9m56s (x2 over 9m57s)  kubelet          Node ha-097312-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m56s (x2 over 9m57s)  kubelet          Node ha-097312-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m56s (x2 over 9m57s)  kubelet          Node ha-097312-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m53s                  node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   NodeReady                9m36s                  kubelet          Node ha-097312-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m10s                  node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   RegisteredNode           2m2s                   node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   NodeNotReady             90s                    node-controller  Node ha-097312-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           37s                    node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)        kubelet          Node ha-097312-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)        kubelet          Node ha-097312-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)        kubelet          Node ha-097312-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                     kubelet          Node ha-097312-m04 has been rebooted, boot id: 08d8ee6f-9bbf-458d-8f61-8151e6dbaa95
	  Normal   NodeReady                9s                     kubelet          Node ha-097312-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.704633] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.056129] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055848] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170191] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.146996] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.300750] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.930853] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +3.791133] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.059635] kauditd_printk_skb: 158 callbacks suppressed
	[Sep23 12:57] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.088641] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.268527] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.165221] kauditd_printk_skb: 38 callbacks suppressed
	[Sep23 12:58] kauditd_printk_skb: 24 callbacks suppressed
	[Sep23 13:07] systemd-fstab-generator[3542]: Ignoring "noauto" option for root device
	[  +0.151805] systemd-fstab-generator[3554]: Ignoring "noauto" option for root device
	[  +0.171785] systemd-fstab-generator[3568]: Ignoring "noauto" option for root device
	[  +0.153828] systemd-fstab-generator[3580]: Ignoring "noauto" option for root device
	[  +0.296871] systemd-fstab-generator[3608]: Ignoring "noauto" option for root device
	[  +2.935420] systemd-fstab-generator[3703]: Ignoring "noauto" option for root device
	[  +6.615420] kauditd_printk_skb: 132 callbacks suppressed
	[ +12.051283] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.055590] kauditd_printk_skb: 1 callbacks suppressed
	[Sep23 13:08] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [1c4427bd859a561476ab926494c2f6a0c2babe6ae47fd7b07d495a1ccb47adbb] <==
	{"level":"warn","ts":"2024-09-23T13:09:21.206467Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"78afa68a47379fab","rtt":"0s","error":"dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:21.206561Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"78afa68a47379fab","rtt":"0s","error":"dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:23.599227Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.174:2380/version","remote-member-id":"78afa68a47379fab","error":"Get \"https://192.168.39.174:2380/version\": dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:23.599341Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"78afa68a47379fab","error":"Get \"https://192.168.39.174:2380/version\": dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:26.207197Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"78afa68a47379fab","rtt":"0s","error":"dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:26.207348Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"78afa68a47379fab","rtt":"0s","error":"dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:27.601446Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.174:2380/version","remote-member-id":"78afa68a47379fab","error":"Get \"https://192.168.39.174:2380/version\": dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:27.601512Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"78afa68a47379fab","error":"Get \"https://192.168.39.174:2380/version\": dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:31.207804Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"78afa68a47379fab","rtt":"0s","error":"dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:31.207894Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"78afa68a47379fab","rtt":"0s","error":"dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:31.603283Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.174:2380/version","remote-member-id":"78afa68a47379fab","error":"Get \"https://192.168.39.174:2380/version\": dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:31.603448Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"78afa68a47379fab","error":"Get \"https://192.168.39.174:2380/version\": dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:35.606134Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.174:2380/version","remote-member-id":"78afa68a47379fab","error":"Get \"https://192.168.39.174:2380/version\": dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:35.606307Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"78afa68a47379fab","error":"Get \"https://192.168.39.174:2380/version\": dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:36.208950Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"78afa68a47379fab","rtt":"0s","error":"dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T13:09:36.209025Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"78afa68a47379fab","rtt":"0s","error":"dial tcp 192.168.39.174:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-23T13:09:36.424086Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:09:36.424266Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:09:36.429851Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:09:36.440124Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b56431cc78e971c","to":"78afa68a47379fab","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-23T13:09:36.440269Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:09:36.443729Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b56431cc78e971c","to":"78afa68a47379fab","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-23T13:09:36.443861Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"warn","ts":"2024-09-23T13:10:15.816259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.600782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-097312-m04\" ","response":"range_response_count:1 size:3394"}
	{"level":"info","ts":"2024-09-23T13:10:15.816613Z","caller":"traceutil/trace.go:171","msg":"trace[682167175] range","detail":"{range_begin:/registry/minions/ha-097312-m04; range_end:; response_count:1; response_revision:2513; }","duration":"148.008469ms","start":"2024-09-23T13:10:15.668577Z","end":"2024-09-23T13:10:15.816585Z","steps":["trace[682167175] 'range keys from in-memory index tree'  (duration: 146.385978ms)"],"step_count":1}
	
	
	==> etcd [9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad] <==
	{"level":"warn","ts":"2024-09-23T13:05:47.883071Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:05:46.700228Z","time spent":"1.182836709s","remote":"127.0.0.1:35406","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 "}
	2024/09/23 13:05:47 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-23T13:05:47.943576Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.160:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T13:05:47.943739Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.160:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T13:05:47.945279Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6b56431cc78e971c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-23T13:05:47.945570Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945660Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945705Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945841Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945896Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945950Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945977Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946000Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946028Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946126Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946323Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946389Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946441Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946468Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.951521Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.160:2380"}
	{"level":"warn","ts":"2024-09-23T13:05:47.951546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.26307996s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-23T13:05:47.951712Z","caller":"traceutil/trace.go:171","msg":"trace[672836981] range","detail":"{range_begin:; range_end:; }","duration":"9.263262793s","start":"2024-09-23T13:05:38.688440Z","end":"2024-09-23T13:05:47.951702Z","steps":["trace[672836981] 'agreement among raft nodes before linearized reading'  (duration: 9.263077519s)"],"step_count":1}
	{"level":"error","ts":"2024-09-23T13:05:47.951772Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-23T13:05:47.951730Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.160:2380"}
	{"level":"info","ts":"2024-09-23T13:05:47.951842Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-097312","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.160:2380"],"advertise-client-urls":["https://192.168.39.160:2379"]}
	
	
	==> kernel <==
	 13:10:21 up 13 min,  0 users,  load average: 0.59, 0.45, 0.29
	Linux ha-097312 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c] <==
	I0923 13:05:19.635787       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:05:19.635838       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:05:19.636002       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:05:19.636028       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:05:19.636074       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:05:19.636081       1 main.go:299] handling current node
	I0923 13:05:19.636092       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:05:19.636096       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	E0923 13:05:26.765078       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1798&timeout=8m22s&timeoutSeconds=502&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0923 13:05:29.635162       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:05:29.635212       1 main.go:299] handling current node
	I0923 13:05:29.635229       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:05:29.635235       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:05:29.635376       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:05:29.635395       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:05:29.635438       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:05:29.635443       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:05:39.644516       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:05:39.644682       1 main.go:299] handling current node
	I0923 13:05:39.644716       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:05:39.644740       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:05:39.644975       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:05:39.645100       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:05:39.645189       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:05:39.645210       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [fddebc96422aa20750bd4deb2fa7a71b609a0b73820282e5572365906bad733d] <==
	I0923 13:09:41.351678       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:09:51.345918       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:09:51.345964       1 main.go:299] handling current node
	I0923 13:09:51.345979       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:09:51.345985       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:09:51.346164       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:09:51.346190       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:09:51.346258       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:09:51.346274       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:10:01.347780       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:10:01.347861       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:10:01.348025       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:10:01.348047       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:10:01.348107       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:10:01.348122       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:10:01.348181       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:10:01.348190       1 main.go:299] handling current node
	I0923 13:10:11.353451       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:10:11.353602       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:10:11.353884       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:10:11.353925       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:10:11.354040       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:10:11.354074       1 main.go:299] handling current node
	I0923 13:10:11.354109       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:10:11.354121       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1f3d21af63b5abeda02040f268e6cc8e42b9f5c0d833e8e462586290e7f1d4c6] <==
	I0923 13:08:07.318149       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0923 13:08:07.237145       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0923 13:08:07.346131       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 13:08:07.347574       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 13:08:07.347711       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 13:08:07.358076       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 13:08:07.358665       1 aggregator.go:171] initial CRD sync complete...
	I0923 13:08:07.358777       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 13:08:07.358846       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 13:08:07.358950       1 cache.go:39] Caches are synced for autoregister controller
	I0923 13:08:07.364287       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 13:08:07.381264       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:08:07.381366       1 policy_source.go:224] refreshing policies
	W0923 13:08:07.381718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.174]
	I0923 13:08:07.384081       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 13:08:07.405710       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0923 13:08:07.414076       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0923 13:08:07.418198       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 13:08:07.418352       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 13:08:07.436510       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 13:08:07.438221       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 13:08:07.443350       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0923 13:08:07.469912       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 13:08:08.250797       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0923 13:08:08.734248       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160 192.168.39.174]
	
	
	==> kube-apiserver [f3bb38d637855f4ef3d09f67d9d173fa2f585dfbf0fd48555c4d80a36d7a8096] <==
	I0923 13:07:30.126591       1 options.go:228] external host was not specified, using 192.168.39.160
	I0923 13:07:30.129914       1 server.go:142] Version: v1.31.1
	I0923 13:07:30.129966       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:07:31.071830       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0923 13:07:31.078049       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:07:31.078478       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0923 13:07:31.078717       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0923 13:07:31.079040       1 instance.go:232] Using reconciler: lease
	W0923 13:07:51.062203       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0923 13:07:51.062304       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0923 13:07:51.080240       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0923 13:07:51.080248       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [063ee6e5cb4852bcf99e81658a40b6a882427bed47d9cff993d0a1d51f047fab] <==
	I0923 13:07:31.369865       1 serving.go:386] Generated self-signed cert in-memory
	I0923 13:07:31.569092       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0923 13:07:31.571140       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:07:31.573217       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0923 13:07:31.573356       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 13:07:31.573427       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 13:07:31.573438       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0923 13:07:52.087869       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.160:8443/healthz\": dial tcp 192.168.39.160:8443: connect: connection refused"
	
	
	==> kube-controller-manager [327bfbcf6b79a16f1a5d0c94377815fc5fbb5bc82e544e68403e7ec0e90448e8] <==
	I0923 13:08:50.250494       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="107.479µs"
	I0923 13:08:51.589142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:08:51.593812       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m03"
	I0923 13:08:51.614083       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:08:51.620532       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m03"
	I0923 13:08:51.673116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.747326ms"
	I0923 13:08:51.673427       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="204.505µs"
	I0923 13:08:54.397491       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m03"
	I0923 13:08:55.216343       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m02"
	I0923 13:08:56.849050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m03"
	I0923 13:09:04.470827       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:09:06.939196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:09:24.399113       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m03"
	I0923 13:09:24.420074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m03"
	I0923 13:09:25.433175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.102µs"
	I0923 13:09:26.846475       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m03"
	I0923 13:09:44.407285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:09:44.511757       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:09:45.604831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.833373ms"
	I0923 13:09:45.604982       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.871µs"
	I0923 13:09:55.216836       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m03"
	I0923 13:10:12.233772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:10:12.233979       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-097312-m04"
	I0923 13:10:12.262586       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:10:14.381432       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	
	
	==> kube-proxy [37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce] <==
	E0923 13:04:28.461329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:31.534469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:31.534555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:31.534498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:31.534825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:31.534691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:31.535025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:37.676134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:37.676214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:37.676286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:37.677122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:37.677448       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:37.677489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:46.892917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:46.893465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:49.965185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:49.965405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:49.965692       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:49.965791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:05:08.396897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:05:08.397298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:05:11.468720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:05:11.468781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:05:14.541131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:05:14.541985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [6527aca4afc7a32189ae29950d639e34564886f591210b00866727f72ecf2617] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:07:32.780099       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-097312\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 13:07:35.854174       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-097312\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 13:07:38.924888       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-097312\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 13:07:45.069932       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-097312\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 13:07:54.285672       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-097312\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0923 13:08:13.276174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.160"]
	E0923 13:08:13.276324       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:08:13.317003       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:08:13.317078       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:08:13.317112       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:08:13.321518       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:08:13.322030       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:08:13.322065       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:08:13.325303       1 config.go:199] "Starting service config controller"
	I0923 13:08:13.325426       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:08:13.325575       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:08:13.325728       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:08:13.327678       1 config.go:328] "Starting node config controller"
	I0923 13:08:13.327839       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:08:13.426756       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:08:13.426846       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:08:13.430348       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0267858752e0468b892f98165ea7b1e17a2afde6ca05faccacf5ab35984ae965] <==
	W0923 13:08:00.033850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.160:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:00.033959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.160:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:00.149731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.160:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:00.149779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.160:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:00.224373       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.160:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:00.224465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.160:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:00.329449       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.160:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:00.329595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.160:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:00.396316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.160:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:00.396386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.160:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:00.667000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.160:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:00.667073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.160:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:01.203706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.160:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:01.203841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.160:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:01.344155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.160:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:01.344303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.160:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:01.878611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.160:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:01.878892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.160:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:07.258185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:08:07.258276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:08:07.258465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:08:07.258539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:08:07.259334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 13:08:07.259421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 13:08:11.393527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431] <==
	E0923 12:57:00.190177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.223708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:57:00.223794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.255027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.255136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.582968       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:57:00.583073       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 12:57:02.534371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 12:59:14.854178       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vs524\": pod kube-proxy-vs524 is already assigned to node \"ha-097312-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vs524" node="ha-097312-m03"
	E0923 12:59:14.854357       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 92738649-c52b-44d5-866b-8cda751a538c(kube-system/kube-proxy-vs524) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vs524"
	E0923 12:59:14.854394       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vs524\": pod kube-proxy-vs524 is already assigned to node \"ha-097312-m03\"" pod="kube-system/kube-proxy-vs524"
	I0923 12:59:14.854436       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vs524" node="ha-097312-m03"
	E0923 13:05:38.218871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0923 13:05:38.347872       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0923 13:05:38.559003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0923 13:05:38.616843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0923 13:05:38.689154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0923 13:05:40.389602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0923 13:05:41.850274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0923 13:05:43.602431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0923 13:05:43.741733       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0923 13:05:45.717897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0923 13:05:45.847800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0923 13:05:46.585273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0923 13:05:47.874452       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 23 13:09:02 ha-097312 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:09:02 ha-097312 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:09:02 ha-097312 kubelet[1304]: E0923 13:09:02.314609    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096942314363339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:09:02 ha-097312 kubelet[1304]: E0923 13:09:02.314683    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096942314363339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:09:12 ha-097312 kubelet[1304]: E0923 13:09:12.316028    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096952315434685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:09:12 ha-097312 kubelet[1304]: E0923 13:09:12.316106    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096952315434685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:09:15 ha-097312 kubelet[1304]: I0923 13:09:15.139571    1304 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-097312" podUID="b26dfdf8-fa4b-4822-a88c-fe7af53be81b"
	Sep 23 13:09:15 ha-097312 kubelet[1304]: I0923 13:09:15.161435    1304 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-097312"
	Sep 23 13:09:22 ha-097312 kubelet[1304]: E0923 13:09:22.322510    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096962318563945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:09:22 ha-097312 kubelet[1304]: E0923 13:09:22.322534    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096962318563945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:09:32 ha-097312 kubelet[1304]: E0923 13:09:32.328135    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096972327830994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:09:32 ha-097312 kubelet[1304]: E0923 13:09:32.328160    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096972327830994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:09:42 ha-097312 kubelet[1304]: E0923 13:09:42.330381    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096982329979126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:09:42 ha-097312 kubelet[1304]: E0923 13:09:42.330459    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096982329979126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:09:52 ha-097312 kubelet[1304]: E0923 13:09:52.332192    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096992331868803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:09:52 ha-097312 kubelet[1304]: E0923 13:09:52.332287    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727096992331868803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:10:02 ha-097312 kubelet[1304]: E0923 13:10:02.165774    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:10:02 ha-097312 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:10:02 ha-097312 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:10:02 ha-097312 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:10:02 ha-097312 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:10:02 ha-097312 kubelet[1304]: E0923 13:10:02.335129    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097002334426159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:10:02 ha-097312 kubelet[1304]: E0923 13:10:02.335233    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097002334426159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:10:12 ha-097312 kubelet[1304]: E0923 13:10:12.339314    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097012338827972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:10:12 ha-097312 kubelet[1304]: E0923 13:10:12.339815    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097012338827972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 13:10:19.817763  689478 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19690-662205/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
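The "bufio.Scanner: token too long" message in the stderr capture above is Go's standard-library scanner hitting its default 64 KiB per-token limit while reading the very long lines in lastStart.txt. As an illustrative sketch only (not minikube's actual code; the file name and the 10 MiB cap are placeholder values), the snippet below reproduces the limit and shows the usual workaround of giving the scanner a larger buffer before scanning:

	// Illustrative sketch: reproduce and work around "bufio.Scanner: token too long".
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // placeholder path; any file with one very long line
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// The default per-token limit is bufio.MaxScanTokenSize (64 KiB); a longer
		// line makes Scan() stop and Err() return bufio.ErrTooLong, which prints as
		// "bufio.Scanner: token too long". Raising the cap avoids that:
		scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // 10 MiB cap (example value)

		for scanner.Scan() {
			_ = scanner.Text() // process each line
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}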
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-097312 -n ha-097312
helpers_test.go:261: (dbg) Run:  kubectl --context ha-097312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (397.45s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (142.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-097312 stop -v=7 --alsologtostderr: exit status 82 (2m0.493019643s)
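The stop command gives up after roughly two minutes; the stderr capture below shows libmachine polling the "ha-097312-m04" domain once per second against a fixed budget of 120 attempts ("Waiting for machine to stop N/120") without the machine ever reporting stopped. For orientation, here is a minimal Go sketch of that poll-with-budget pattern; the MachineState type, getState stub, and waitForStopped helper are hypothetical stand-ins, not minikube's real driver API:

	// Illustrative sketch of a poll-with-budget wait loop, matching the
	// "Waiting for machine to stop N/120" pattern in the log below.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	type MachineState int

	const (
		Running MachineState = iota
		Stopped
	)

	// getState stands in for a driver call that asks the hypervisor for the
	// domain's current state; stubbed out here.
	func getState(name string) (MachineState, error) {
		return Running, nil
	}

	// waitForStopped polls once per second, up to maxAttempts times, and fails
	// if the machine never reaches the Stopped state.
	func waitForStopped(name string, maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			st, err := getState(name)
			if err != nil {
				return err
			}
			if st == Stopped {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		return errors.New("timed out waiting for machine to stop")
	}

	func main() {
		if err := waitForStopped("ha-097312-m04", 120); err != nil {
			fmt.Println("stop failed:", err)
		}
	}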

                                                
                                                
-- stdout --
	* Stopping node "ha-097312-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:10:39.563811  689922 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:10:39.563934  689922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:10:39.563943  689922 out.go:358] Setting ErrFile to fd 2...
	I0923 13:10:39.563950  689922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:10:39.564177  689922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:10:39.564439  689922 out.go:352] Setting JSON to false
	I0923 13:10:39.564525  689922 mustload.go:65] Loading cluster: ha-097312
	I0923 13:10:39.564944  689922 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:10:39.565033  689922 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 13:10:39.565210  689922 mustload.go:65] Loading cluster: ha-097312
	I0923 13:10:39.565336  689922 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:10:39.565365  689922 stop.go:39] StopHost: ha-097312-m04
	I0923 13:10:39.565752  689922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:10:39.565799  689922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:10:39.581521  689922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0923 13:10:39.581998  689922 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:10:39.582556  689922 main.go:141] libmachine: Using API Version  1
	I0923 13:10:39.582579  689922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:10:39.583036  689922 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:10:39.585484  689922 out.go:177] * Stopping node "ha-097312-m04"  ...
	I0923 13:10:39.586839  689922 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0923 13:10:39.586881  689922 main.go:141] libmachine: (ha-097312-m04) Calling .DriverName
	I0923 13:10:39.587113  689922 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0923 13:10:39.587156  689922 main.go:141] libmachine: (ha-097312-m04) Calling .GetSSHHostname
	I0923 13:10:39.590053  689922 main.go:141] libmachine: (ha-097312-m04) DBG | domain ha-097312-m04 has defined MAC address 52:54:00:b7:b6:3b in network mk-ha-097312
	I0923 13:10:39.590555  689922 main.go:141] libmachine: (ha-097312-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:b6:3b", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 14:10:06 +0000 UTC Type:0 Mac:52:54:00:b7:b6:3b Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-097312-m04 Clientid:01:52:54:00:b7:b6:3b}
	I0923 13:10:39.590598  689922 main.go:141] libmachine: (ha-097312-m04) DBG | domain ha-097312-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b7:b6:3b in network mk-ha-097312
	I0923 13:10:39.590737  689922 main.go:141] libmachine: (ha-097312-m04) Calling .GetSSHPort
	I0923 13:10:39.590922  689922 main.go:141] libmachine: (ha-097312-m04) Calling .GetSSHKeyPath
	I0923 13:10:39.591073  689922 main.go:141] libmachine: (ha-097312-m04) Calling .GetSSHUsername
	I0923 13:10:39.591208  689922 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312-m04/id_rsa Username:docker}
	I0923 13:10:39.672313  689922 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0923 13:10:39.725090  689922 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0923 13:10:39.777906  689922 main.go:141] libmachine: Stopping "ha-097312-m04"...
	I0923 13:10:39.778040  689922 main.go:141] libmachine: (ha-097312-m04) Calling .GetState
	I0923 13:10:39.780083  689922 main.go:141] libmachine: (ha-097312-m04) Calling .Stop
	I0923 13:10:39.784210  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 0/120
	I0923 13:10:40.785391  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 1/120
	I0923 13:10:41.786974  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 2/120
	I0923 13:10:42.788483  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 3/120
	I0923 13:10:43.790028  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 4/120
	I0923 13:10:44.792481  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 5/120
	I0923 13:10:45.794263  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 6/120
	I0923 13:10:46.796554  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 7/120
	I0923 13:10:47.798098  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 8/120
	I0923 13:10:48.799778  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 9/120
	I0923 13:10:49.802583  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 10/120
	I0923 13:10:50.804646  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 11/120
	I0923 13:10:51.806407  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 12/120
	I0923 13:10:52.808190  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 13/120
	I0923 13:10:53.809766  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 14/120
	I0923 13:10:54.812315  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 15/120
	I0923 13:10:55.814078  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 16/120
	I0923 13:10:56.815633  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 17/120
	I0923 13:10:57.817166  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 18/120
	I0923 13:10:58.818525  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 19/120
	I0923 13:10:59.821352  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 20/120
	I0923 13:11:00.823433  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 21/120
	I0923 13:11:01.824893  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 22/120
	I0923 13:11:02.826454  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 23/120
	I0923 13:11:03.828984  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 24/120
	I0923 13:11:04.831580  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 25/120
	I0923 13:11:05.833363  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 26/120
	I0923 13:11:06.835052  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 27/120
	I0923 13:11:07.836725  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 28/120
	I0923 13:11:08.839182  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 29/120
	I0923 13:11:09.841801  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 30/120
	I0923 13:11:10.843365  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 31/120
	I0923 13:11:11.844870  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 32/120
	I0923 13:11:12.846326  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 33/120
	I0923 13:11:13.847877  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 34/120
	I0923 13:11:14.850590  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 35/120
	I0923 13:11:15.852483  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 36/120
	I0923 13:11:16.853910  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 37/120
	I0923 13:11:17.855798  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 38/120
	I0923 13:11:18.857121  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 39/120
	I0923 13:11:19.859786  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 40/120
	I0923 13:11:20.861405  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 41/120
	I0923 13:11:21.862845  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 42/120
	I0923 13:11:22.865425  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 43/120
	I0923 13:11:23.867038  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 44/120
	I0923 13:11:24.869325  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 45/120
	I0923 13:11:25.871055  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 46/120
	I0923 13:11:26.873028  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 47/120
	I0923 13:11:27.874574  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 48/120
	I0923 13:11:28.876251  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 49/120
	I0923 13:11:29.878487  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 50/120
	I0923 13:11:30.880700  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 51/120
	I0923 13:11:31.882298  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 52/120
	I0923 13:11:32.884323  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 53/120
	I0923 13:11:33.885810  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 54/120
	I0923 13:11:34.888001  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 55/120
	I0923 13:11:35.889621  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 56/120
	I0923 13:11:36.891021  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 57/120
	I0923 13:11:37.892360  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 58/120
	I0923 13:11:38.893768  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 59/120
	I0923 13:11:39.895176  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 60/120
	I0923 13:11:40.896541  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 61/120
	I0923 13:11:41.897852  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 62/120
	I0923 13:11:42.899225  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 63/120
	I0923 13:11:43.900736  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 64/120
	I0923 13:11:44.903052  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 65/120
	I0923 13:11:45.904605  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 66/120
	I0923 13:11:46.906121  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 67/120
	I0923 13:11:47.907730  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 68/120
	I0923 13:11:48.909199  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 69/120
	I0923 13:11:49.911190  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 70/120
	I0923 13:11:50.912727  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 71/120
	I0923 13:11:51.914375  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 72/120
	I0923 13:11:52.915768  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 73/120
	I0923 13:11:53.917539  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 74/120
	I0923 13:11:54.919571  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 75/120
	I0923 13:11:55.920972  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 76/120
	I0923 13:11:56.922716  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 77/120
	I0923 13:11:57.924346  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 78/120
	I0923 13:11:58.925978  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 79/120
	I0923 13:11:59.928231  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 80/120
	I0923 13:12:00.929741  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 81/120
	I0923 13:12:01.931703  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 82/120
	I0923 13:12:02.933389  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 83/120
	I0923 13:12:03.934870  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 84/120
	I0923 13:12:04.936947  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 85/120
	I0923 13:12:05.938942  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 86/120
	I0923 13:12:06.940777  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 87/120
	I0923 13:12:07.942389  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 88/120
	I0923 13:12:08.944033  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 89/120
	I0923 13:12:09.946841  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 90/120
	I0923 13:12:10.948679  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 91/120
	I0923 13:12:11.950457  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 92/120
	I0923 13:12:12.953067  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 93/120
	I0923 13:12:13.954994  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 94/120
	I0923 13:12:14.957391  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 95/120
	I0923 13:12:15.959136  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 96/120
	I0923 13:12:16.960719  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 97/120
	I0923 13:12:17.962326  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 98/120
	I0923 13:12:18.963968  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 99/120
	I0923 13:12:19.965206  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 100/120
	I0923 13:12:20.966960  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 101/120
	I0923 13:12:21.968605  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 102/120
	I0923 13:12:22.970410  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 103/120
	I0923 13:12:23.972528  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 104/120
	I0923 13:12:24.974809  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 105/120
	I0923 13:12:25.976439  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 106/120
	I0923 13:12:26.978148  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 107/120
	I0923 13:12:27.980746  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 108/120
	I0923 13:12:28.982361  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 109/120
	I0923 13:12:29.984873  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 110/120
	I0923 13:12:30.986548  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 111/120
	I0923 13:12:31.988432  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 112/120
	I0923 13:12:32.989852  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 113/120
	I0923 13:12:33.991110  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 114/120
	I0923 13:12:34.992958  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 115/120
	I0923 13:12:35.994273  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 116/120
	I0923 13:12:36.996481  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 117/120
	I0923 13:12:37.998303  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 118/120
	I0923 13:12:39.000664  689922 main.go:141] libmachine: (ha-097312-m04) Waiting for machine to stop 119/120
	I0923 13:12:40.001586  689922 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0923 13:12:40.001679  689922 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0923 13:12:40.003609  689922 out.go:201] 
	W0923 13:12:40.005000  689922 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0923 13:12:40.005018  689922 out.go:270] * 
	* 
	W0923 13:12:40.008234  689922 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 13:12:40.009760  689922 out.go:201] 

                                                
                                                
** /stderr **
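The 120 "Waiting for machine to stop N/120" lines above show the kvm2 driver polling the domain roughly once per second for two minutes before giving up with GUEST_STOP_TIMEOUT: ha-097312-m04 never acted on the graceful shutdown request. When reproducing this locally, a minimal sketch along the following lines (assuming shell access to the CI host and the qemu:///system libvirt URI that the kvm2 driver records in the cluster config later in these logs) can confirm whether the guest is still running and, as a last resort, force it off; note that `virsh destroy` is a hard power-off, not a graceful stop.

	# Check whether the libvirt domain for the m04 worker is still running.
	virsh --connect qemu:///system list --all | grep ha-097312-m04

	# Retry a graceful ACPI shutdown and check the reported state.
	virsh --connect qemu:///system shutdown ha-097312-m04
	virsh --connect qemu:///system domstate ha-097312-m04

	# Last resort: hard power-off (bypasses the guest's own shutdown path).
	virsh --connect qemu:///system destroy ha-097312-m04
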
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-097312 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr: (19.127694761s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr": 
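Because the stop command exited early with GUEST_STOP_TIMEOUT, the cluster was not left in the fully stopped state these three assertions expect (two control-plane nodes present, three kubelets stopped, two apiservers stopped). To inspect the actual post-stop state by hand, the same status probe can be re-run and cross-checked from the Kubernetes side; the kubectl context name is assumed to match the profile name, as elsewhere in this report.

	# Re-run the status probe used by the test (command taken verbatim from the log above).
	out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr

	# Cross-check node state via the API server (context name assumed to equal the profile name).
	kubectl --context ha-097312 get nodes -o wide
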
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-097312 -n ha-097312
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-097312 logs -n 25: (1.766510968s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-097312 ssh -n ha-097312-m02 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04:/home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m04 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp testdata/cp-test.txt                                                | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3809348295/001/cp-test_ha-097312-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312:/home/docker/cp-test_ha-097312-m04_ha-097312.txt                       |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312 sudo cat                                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312.txt                                 |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m02:/home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m02 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m03:/home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n                                                                 | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | ha-097312-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-097312 ssh -n ha-097312-m03 sudo cat                                          | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC | 23 Sep 24 13:01 UTC |
	|         | /home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-097312 node stop m02 -v=7                                                     | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-097312 node start m02 -v=7                                                    | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-097312 -v=7                                                           | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-097312 -v=7                                                                | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-097312 --wait=true -v=7                                                    | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:05 UTC | 23 Sep 24 13:10 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-097312                                                                | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:10 UTC |                     |
	| node    | ha-097312 node delete m03 -v=7                                                   | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:10 UTC | 23 Sep 24 13:10 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-097312 stop -v=7                                                              | ha-097312 | jenkins | v1.34.0 | 23 Sep 24 13:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:05:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:05:46.696980  688055 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:05:46.697121  688055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:05:46.697131  688055 out.go:358] Setting ErrFile to fd 2...
	I0923 13:05:46.697136  688055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:05:46.697351  688055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:05:46.697990  688055 out.go:352] Setting JSON to false
	I0923 13:05:46.699028  688055 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":10090,"bootTime":1727086657,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 13:05:46.699152  688055 start.go:139] virtualization: kvm guest
	I0923 13:05:46.701525  688055 out.go:177] * [ha-097312] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 13:05:46.703391  688055 notify.go:220] Checking for updates...
	I0923 13:05:46.703416  688055 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:05:46.704761  688055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:05:46.706280  688055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 13:05:46.707595  688055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 13:05:46.709085  688055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 13:05:46.710370  688055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:05:46.712056  688055 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:05:46.712200  688055 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:05:46.712676  688055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:05:46.712739  688055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:05:46.728805  688055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44823
	I0923 13:05:46.729375  688055 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:05:46.730008  688055 main.go:141] libmachine: Using API Version  1
	I0923 13:05:46.730042  688055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:05:46.730500  688055 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:05:46.730746  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:05:46.771029  688055 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 13:05:46.772668  688055 start.go:297] selected driver: kvm2
	I0923 13:05:46.772687  688055 start.go:901] validating driver "kvm2" against &{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:05:46.772836  688055 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:05:46.773208  688055 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:05:46.773321  688055 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 13:05:46.789171  688055 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 13:05:46.790017  688055 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:05:46.790072  688055 cni.go:84] Creating CNI manager for ""
	I0923 13:05:46.790148  688055 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0923 13:05:46.790213  688055 start.go:340] cluster config:
	{Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:05:46.790364  688055 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:05:46.792737  688055 out.go:177] * Starting "ha-097312" primary control-plane node in "ha-097312" cluster
	I0923 13:05:46.794515  688055 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:05:46.794580  688055 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 13:05:46.794590  688055 cache.go:56] Caching tarball of preloaded images
	I0923 13:05:46.794686  688055 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 13:05:46.794697  688055 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 13:05:46.794833  688055 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/config.json ...
	I0923 13:05:46.795060  688055 start.go:360] acquireMachinesLock for ha-097312: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:05:46.795113  688055 start.go:364] duration metric: took 32.448µs to acquireMachinesLock for "ha-097312"
	I0923 13:05:46.795129  688055 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:05:46.795135  688055 fix.go:54] fixHost starting: 
	I0923 13:05:46.795414  688055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:05:46.795450  688055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:05:46.810871  688055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I0923 13:05:46.811360  688055 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:05:46.811862  688055 main.go:141] libmachine: Using API Version  1
	I0923 13:05:46.811886  688055 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:05:46.812227  688055 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:05:46.812448  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:05:46.812616  688055 main.go:141] libmachine: (ha-097312) Calling .GetState
	I0923 13:05:46.814211  688055 fix.go:112] recreateIfNeeded on ha-097312: state=Running err=<nil>
	W0923 13:05:46.814247  688055 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:05:46.816590  688055 out.go:177] * Updating the running kvm2 "ha-097312" VM ...
	I0923 13:05:46.818023  688055 machine.go:93] provisionDockerMachine start ...
	I0923 13:05:46.818053  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:05:46.818354  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:46.821479  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:46.822026  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:46.822077  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:46.822379  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:05:46.822574  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:46.822735  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:46.822880  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:05:46.823173  688055 main.go:141] libmachine: Using SSH client type: native
	I0923 13:05:46.823472  688055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 13:05:46.823488  688055 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:05:46.939540  688055 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312
	
	I0923 13:05:46.939574  688055 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 13:05:46.939836  688055 buildroot.go:166] provisioning hostname "ha-097312"
	I0923 13:05:46.939872  688055 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 13:05:46.940043  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:46.943429  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:46.943929  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:46.943968  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:46.944171  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:05:46.944386  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:46.944599  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:46.944731  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:05:46.944912  688055 main.go:141] libmachine: Using SSH client type: native
	I0923 13:05:46.945087  688055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 13:05:46.945102  688055 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-097312 && echo "ha-097312" | sudo tee /etc/hostname
	I0923 13:05:47.068753  688055 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-097312
	
	I0923 13:05:47.068784  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:47.071493  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.071935  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:47.071970  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.072123  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:05:47.072333  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:47.072531  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:47.072685  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:05:47.072841  688055 main.go:141] libmachine: Using SSH client type: native
	I0923 13:05:47.073034  688055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 13:05:47.073056  688055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-097312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-097312/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-097312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:05:47.186928  688055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:05:47.186966  688055 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 13:05:47.186992  688055 buildroot.go:174] setting up certificates
	I0923 13:05:47.187004  688055 provision.go:84] configureAuth start
	I0923 13:05:47.187015  688055 main.go:141] libmachine: (ha-097312) Calling .GetMachineName
	I0923 13:05:47.187278  688055 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 13:05:47.190282  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.190871  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:47.190901  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.191067  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:47.193413  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.193723  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:47.193744  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.193904  688055 provision.go:143] copyHostCerts
	I0923 13:05:47.193956  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 13:05:47.194007  688055 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 13:05:47.194028  688055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 13:05:47.194114  688055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 13:05:47.194247  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 13:05:47.194275  688055 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 13:05:47.194284  688055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 13:05:47.194324  688055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 13:05:47.194400  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 13:05:47.194435  688055 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 13:05:47.194444  688055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 13:05:47.194478  688055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 13:05:47.194546  688055 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.ha-097312 san=[127.0.0.1 192.168.39.160 ha-097312 localhost minikube]
	I0923 13:05:47.574760  688055 provision.go:177] copyRemoteCerts
	I0923 13:05:47.574841  688055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:05:47.574873  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:47.578017  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.578381  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:47.578422  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.578653  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:05:47.578895  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:47.579115  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:05:47.579254  688055 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 13:05:47.664339  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 13:05:47.664419  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 13:05:47.693346  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 13:05:47.693424  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0923 13:05:47.718325  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 13:05:47.718418  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:05:47.746634  688055 provision.go:87] duration metric: took 559.615125ms to configureAuth
	I0923 13:05:47.746668  688055 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:05:47.746936  688055 config.go:182] Loaded profile config "ha-097312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:05:47.747044  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:05:47.750584  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.751120  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:05:47.751154  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:05:47.751355  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:05:47.751570  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:47.751747  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:05:47.751964  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:05:47.752155  688055 main.go:141] libmachine: Using SSH client type: native
	I0923 13:05:47.752372  688055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 13:05:47.752390  688055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:07:18.498816  688055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 13:07:18.498846  688055 machine.go:96] duration metric: took 1m31.680804191s to provisionDockerMachine
	I0923 13:07:18.498861  688055 start.go:293] postStartSetup for "ha-097312" (driver="kvm2")
	I0923 13:07:18.498877  688055 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:07:18.498901  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.499333  688055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:07:18.499366  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:07:18.502894  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.503364  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.503392  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.503605  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:07:18.503809  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.503960  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:07:18.504118  688055 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 13:07:18.589695  688055 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:07:18.594319  688055 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:07:18.594355  688055 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 13:07:18.594430  688055 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 13:07:18.594535  688055 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 13:07:18.594550  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 13:07:18.594645  688055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:07:18.604340  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:07:18.630044  688055 start.go:296] duration metric: took 131.165846ms for postStartSetup
	I0923 13:07:18.630098  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.630455  688055 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0923 13:07:18.630495  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:07:18.633680  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.634256  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.634297  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.634399  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:07:18.634680  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.634838  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:07:18.634961  688055 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	W0923 13:07:18.715817  688055 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0923 13:07:18.715843  688055 fix.go:56] duration metric: took 1m31.920709459s for fixHost
	I0923 13:07:18.715872  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:07:18.718515  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.718870  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.718908  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.719046  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:07:18.719268  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.719471  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.719615  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:07:18.719891  688055 main.go:141] libmachine: Using SSH client type: native
	I0923 13:07:18.720170  688055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0923 13:07:18.720196  688055 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:07:18.826507  688055 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727096838.789442525
	
	I0923 13:07:18.826533  688055 fix.go:216] guest clock: 1727096838.789442525
	I0923 13:07:18.826542  688055 fix.go:229] Guest: 2024-09-23 13:07:18.789442525 +0000 UTC Remote: 2024-09-23 13:07:18.715851736 +0000 UTC m=+92.061293391 (delta=73.590789ms)
	I0923 13:07:18.826595  688055 fix.go:200] guest clock delta is within tolerance: 73.590789ms
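The clock check above runs "date +%s.%N" on the guest and compares it to the host's own wall clock; here the skew is about 74ms, inside minikube's tolerance, so no time adjustment is pushed to the VM. A minimal host-side sketch of the same comparison, assuming passwordless SSH with the key path, user and IP shown in this log:

    # Hypothetical re-check of guest clock skew against the host clock.
    guest=$(ssh -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa \
        docker@192.168.39.160 'date +%s.%N')
    host=$(date +%s.%N)
    # Print the absolute skew in seconds.
    echo "$guest $host" | awk '{d=$1-$2; if (d<0) d=-d; printf "clock skew: %.3fs\n", d}'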
	I0923 13:07:18.826603  688055 start.go:83] releasing machines lock for "ha-097312", held for 1m32.031479619s
	I0923 13:07:18.826629  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.826922  688055 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 13:07:18.829600  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.830006  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.830032  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.830242  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.830800  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.830973  688055 main.go:141] libmachine: (ha-097312) Calling .DriverName
	I0923 13:07:18.831073  688055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:07:18.831139  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:07:18.831174  688055 ssh_runner.go:195] Run: cat /version.json
	I0923 13:07:18.831196  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHHostname
	I0923 13:07:18.833936  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.834188  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.834466  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.834493  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.834662  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:07:18.834757  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:18.834784  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:18.834847  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.834929  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHPort
	I0923 13:07:18.834999  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:07:18.835055  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHKeyPath
	I0923 13:07:18.835150  688055 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 13:07:18.835173  688055 main.go:141] libmachine: (ha-097312) Calling .GetSSHUsername
	I0923 13:07:18.835316  688055 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/ha-097312/id_rsa Username:docker}
	I0923 13:07:18.911409  688055 ssh_runner.go:195] Run: systemctl --version
	I0923 13:07:18.955390  688055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:07:19.123007  688055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 13:07:19.128775  688055 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:07:19.128857  688055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:07:19.137970  688055 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 13:07:19.137995  688055 start.go:495] detecting cgroup driver to use...
	I0923 13:07:19.138078  688055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:07:19.155197  688055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:07:19.169620  688055 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:07:19.169707  688055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:07:19.183861  688055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:07:19.198223  688055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:07:19.353120  688055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:07:19.496371  688055 docker.go:233] disabling docker service ...
	I0923 13:07:19.496454  688055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:07:19.512315  688055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:07:19.525784  688055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:07:19.674143  688055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:07:19.826027  688055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
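Because this profile runs CRI-O, the sequence above stops, disables and masks both cri-docker and docker so that only one container runtime is left answering on the node. A condensed sketch of that sequence, assuming the unit names used in this log exist on the guest image:

    # Hand CRI-O exclusive ownership of the node's container runtime.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    # Confirm docker is really gone before reconfiguring CRI-O.
    sudo systemctl is-active --quiet docker || echo "docker is inactive"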
	I0923 13:07:19.841490  688055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:07:19.861706  688055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 13:07:19.861792  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.872636  688055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:07:19.872726  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.883461  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.894266  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.904936  688055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:07:19.915977  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.926513  688055 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.938462  688055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:07:19.948895  688055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:07:19.959121  688055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:07:19.969403  688055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:07:20.116581  688055 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 13:07:22.553549  688055 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.436893651s)
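All of the sed edits above target the same drop-in file, /etc/crio/crio.conf.d/02-crio.conf. A sketch of what that file should roughly contain after the restart, reconstructed from the commands in this log rather than copied from the guest, plus a quick way to confirm it:

    # Expected keys in /etc/crio/crio.conf.d/02-crio.conf after the edits above:
    #   pause_image     = "registry.k8s.io/pause:3.10"
    #   cgroup_manager  = "cgroupfs"
    #   conmon_cgroup   = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf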
	I0923 13:07:22.553606  688055 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:07:22.553659  688055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:07:22.558509  688055 start.go:563] Will wait 60s for crictl version
	I0923 13:07:22.558587  688055 ssh_runner.go:195] Run: which crictl
	I0923 13:07:22.562331  688055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:07:22.608688  688055 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 13:07:22.608780  688055 ssh_runner.go:195] Run: crio --version
	I0923 13:07:22.636010  688055 ssh_runner.go:195] Run: crio --version
	I0923 13:07:22.666425  688055 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 13:07:22.668395  688055 main.go:141] libmachine: (ha-097312) Calling .GetIP
	I0923 13:07:22.671648  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:22.672113  688055 main.go:141] libmachine: (ha-097312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:7f:c5", ip: ""} in network mk-ha-097312: {Iface:virbr1 ExpiryTime:2024-09-23 13:56:36 +0000 UTC Type:0 Mac:52:54:00:06:7f:c5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-097312 Clientid:01:52:54:00:06:7f:c5}
	I0923 13:07:22.672135  688055 main.go:141] libmachine: (ha-097312) DBG | domain ha-097312 has defined IP address 192.168.39.160 and MAC address 52:54:00:06:7f:c5 in network mk-ha-097312
	I0923 13:07:22.672454  688055 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 13:07:22.677484  688055 kubeadm.go:883] updating cluster {Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:07:22.677664  688055 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:07:22.677710  688055 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:07:22.721704  688055 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:07:22.721740  688055 crio.go:433] Images already preloaded, skipping extraction
	I0923 13:07:22.721809  688055 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:07:22.756651  688055 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:07:22.756689  688055 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:07:22.756705  688055 kubeadm.go:934] updating node { 192.168.39.160 8443 v1.31.1 crio true true} ...
	I0923 13:07:22.756846  688055 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-097312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:07:22.756921  688055 ssh_runner.go:195] Run: crio config
	I0923 13:07:22.805465  688055 cni.go:84] Creating CNI manager for ""
	I0923 13:07:22.805500  688055 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0923 13:07:22.805516  688055 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:07:22.805541  688055 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-097312 NodeName:ha-097312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:07:22.805687  688055 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-097312"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 13:07:22.805710  688055 kube-vip.go:115] generating kube-vip config ...
	I0923 13:07:22.805752  688055 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 13:07:22.817117  688055 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
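Control-plane load-balancing in kube-vip relies on IPVS, which is why the modprobe above loads ip_vs and its scheduler modules before the manifest is generated. A small sketch to verify those modules on the guest (module names taken from the command in this log):

    # Confirm the IPVS/conntrack modules needed for kube-vip load-balancing are loaded.
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
        lsmod | grep -q "^$m " && echo "$m: loaded" || echo "$m: missing"
    done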
	I0923 13:07:22.817278  688055 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0923 13:07:22.817357  688055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:07:22.827136  688055 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:07:22.827221  688055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 13:07:22.836641  688055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 13:07:22.853973  688055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:07:22.870952  688055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 13:07:22.887848  688055 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
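At this point the rendered kubeadm config has been staged as /var/tmp/minikube/kubeadm.yaml.new and the kube-vip static pod manifest as /etc/kubernetes/manifests/kube-vip.yaml. If you want to sanity-check the staged kubeadm config by hand, recent kubeadm releases ship a validator; a hedged sketch using the same binary path as this log:

    # Validate the staged kubeadm config (kubeadm >= 1.26 provides "config validate").
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new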
	I0923 13:07:22.905058  688055 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 13:07:22.910298  688055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:07:23.054329  688055 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:07:23.069291  688055 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312 for IP: 192.168.39.160
	I0923 13:07:23.069327  688055 certs.go:194] generating shared ca certs ...
	I0923 13:07:23.069347  688055 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:07:23.069577  688055 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 13:07:23.069635  688055 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 13:07:23.069647  688055 certs.go:256] generating profile certs ...
	I0923 13:07:23.069805  688055 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/client.key
	I0923 13:07:23.069864  688055 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.f2bacd8f
	I0923 13:07:23.069884  688055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.f2bacd8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160 192.168.39.214 192.168.39.174 192.168.39.254]
	I0923 13:07:23.560111  688055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.f2bacd8f ...
	I0923 13:07:23.560148  688055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.f2bacd8f: {Name:mkba1a7ff7fcdf029a4874e87d6a34c95699d0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:07:23.560336  688055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.f2bacd8f ...
	I0923 13:07:23.560349  688055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.f2bacd8f: {Name:mkd072123cc33301ff212141ab17814b18bb44e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:07:23.560415  688055 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt.f2bacd8f -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt
	I0923 13:07:23.560606  688055 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key.f2bacd8f -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key
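The apiserver certificate regenerated above is issued for the cluster service IPs plus every control-plane node IP and the HA virtual IP 192.168.39.254 listed a few lines earlier. A hedged sketch for double-checking the SAN list on the written certificate (path taken from this log):

    # Inspect the Subject Alternative Names of the freshly written apiserver cert.
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt \
        | grep -A1 'Subject Alternative Name'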
	I0923 13:07:23.560757  688055 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key
	I0923 13:07:23.560774  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:07:23.560787  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:07:23.560798  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:07:23.560819  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:07:23.560831  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 13:07:23.560841  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 13:07:23.560854  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 13:07:23.560869  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 13:07:23.560921  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 13:07:23.560948  688055 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 13:07:23.560957  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 13:07:23.560982  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 13:07:23.561003  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:07:23.561023  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 13:07:23.561059  688055 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:07:23.561085  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 13:07:23.561102  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 13:07:23.561120  688055 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:07:23.561741  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:07:23.600445  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:07:23.639005  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:07:23.672909  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:07:23.698256  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0923 13:07:23.722508  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 13:07:23.749014  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:07:23.774998  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/ha-097312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 13:07:23.799683  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 13:07:23.823709  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 13:07:23.847584  688055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:07:23.870761  688055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:07:23.888134  688055 ssh_runner.go:195] Run: openssl version
	I0923 13:07:23.893955  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 13:07:23.904614  688055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 13:07:23.908934  688055 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 13:07:23.908998  688055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 13:07:23.914676  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 13:07:23.925108  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:07:23.936456  688055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:07:23.941455  688055 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:07:23.941528  688055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:07:23.947368  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:07:23.957758  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 13:07:23.969258  688055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 13:07:23.974108  688055 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 13:07:23.974202  688055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 13:07:23.980124  688055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
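The three link commands above follow OpenSSL's c_rehash convention: each CA file under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject-hash name (3ec20f2e.0, b5213941.0 and 51391683.0 here) so the system trust store can resolve it. A minimal sketch of the same pattern for a single certificate:

    # Link one CA certificate into the trust store by its subject hash.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"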
	I0923 13:07:23.990259  688055 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:07:23.994977  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 13:07:24.001065  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 13:07:24.007047  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 13:07:24.012797  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 13:07:24.018767  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 13:07:24.024550  688055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
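Each "-checkend 86400" call above asks OpenSSL whether the certificate will still be valid 24 hours from now; a zero exit means it will, a non-zero exit means it is about to expire (or already has) and would need regenerating. A small sketch of the same check with an explicit message, using one of the paths from this log:

    # Warn if a certificate expires within the next 24 hours (86400 seconds).
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "etcd server certificate is valid for at least another day"
    else
        echo "etcd server certificate expires within 24h" >&2
    fi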
	I0923 13:07:24.030569  688055 kubeadm.go:392] StartCluster: {Name:ha-097312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-097312 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.20 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:07:24.030702  688055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 13:07:24.030752  688055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:07:24.069487  688055 cri.go:89] found id: "7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f"
	I0923 13:07:24.069519  688055 cri.go:89] found id: "734f0cb5eab507be54ea52dbb406b37e87e0dbd8c959f3c135081aae7fc73520"
	I0923 13:07:24.069524  688055 cri.go:89] found id: "65ce8ee9790645dba54c36e9ba009961df64527fd59e20c866265404b97342ad"
	I0923 13:07:24.069528  688055 cri.go:89] found id: "7322669ed5e0c54ea12545610d9e118abd4651267c1bcf8718d21a45f2a03f5e"
	I0923 13:07:24.069531  688055 cri.go:89] found id: "6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828"
	I0923 13:07:24.069535  688055 cri.go:89] found id: "070d45bce8ff98c35a7d8c06328c902bd260bbcd49c6d8b65acf5f2fe3670f05"
	I0923 13:07:24.069538  688055 cri.go:89] found id: "cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab"
	I0923 13:07:24.069541  688055 cri.go:89] found id: "03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c"
	I0923 13:07:24.069544  688055 cri.go:89] found id: "37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce"
	I0923 13:07:24.069552  688055 cri.go:89] found id: "e5095373416a8e45324449515c2fa18882a4b643648236860681c27f7f589bdb"
	I0923 13:07:24.069572  688055 cri.go:89] found id: "9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad"
	I0923 13:07:24.069575  688055 cri.go:89] found id: "5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431"
	I0923 13:07:24.069578  688055 cri.go:89] found id: "1c28bf3f4d80d4048804c687d1cec38aff92ff01ac7556fbe59fd2c73324b333"
	I0923 13:07:24.069580  688055 cri.go:89] found id: "476ad705f89683694506883a4ac379c2339d6097875e3a88c66a078cec041492"
	I0923 13:07:24.069586  688055 cri.go:89] found id: ""
	I0923 13:07:24.069646  688055 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.761592628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097179761569444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22368758-26ef-4c5d-9e3d-0b91e66b1354 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.762356889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd85b1fb-8d8c-4a1b-bcea-ebce74c264f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.762418657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd85b1fb-8d8c-4a1b-bcea-ebce74c264f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.763044954Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d27e0a2d3851698bf14a74ab80a5aa5c92e2b29d0e3e5daf878fedaa77a028b,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096926161834491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327bfbcf6b79a16f1a5d0c94377815fc5fbb5bc82e544e68403e7ec0e90448e8,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096897149733438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f3d21af63b5abeda02040f268e6cc8e42b9f5c0d833e8e462586290e7f1d4c6,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096885162755257,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ef888f8ec955bef6dc9006e9b04cd7f0c520501780bf227a51838b9b055d5,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727096884150521741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5776f6ad95118b2c81dead9b92b71a822195a4bef5adbf5871dcef1697e6d5a6,PodSandboxId:91947e1a82d06b511d7c18ef8debffb602cc4a5086f7adf39c515c6c7780dfe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096883437498706,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31662037f5b073826dbc31fa11734016648662a603866125155e58446d4c73fe,PodSandboxId:62c7b6d1bfb3fcf355ef1262dbf0a6981964acecb49c05105bf7368bae4ee0f2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096862113200841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439831b6eefde7ddc923373d885892d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6527aca4afc7a32189ae29950d639e34564886f591210b00866727f72ecf2617,PodSandboxId:dd9ae27e8638bebbdcb54e62019125f61b83446a30ad75f1e242f56744544025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096850427518493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:fddebc96422aa20750bd4deb2fa7a71b609a0b73820282e5572365906bad733d,PodSandboxId:190c2732e8a28ab3ed97d3ff86abb432daa7bfa8bfbeed10c4fbb8fea19647cb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727096850135206176,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc4e39b
b9f3f77cad4a321dfd137c68026104431291fca9781d2bc69c01eda2,PodSandboxId:757baf6c9026cc6a6c35376a447df19610b1c547d345e66b3943e826e53d744b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096850108299496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0267858752e0468b892f98165ea7b1e17a2afde6ca05faccacf5ab35984ae965,PodSandboxId:694418cca7eb400f2a4cb270d0a9f891885c67d0bff9eeba86619473c970f3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096850189041185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c4427bd859a561476ab926494c2f6a0c2babe6ae47fd7b07d495a1ccb47adbb,PodSandboxId:c1b4e30bea3c71aa0ef8865f692497339ab86e89f3a99aa3e70cd62bf3002a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096850027124073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063ee6e5cb4852bcf99e81658a40b6a882427bed47d9cff993d0a1d51f047fab,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727096849904481582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb38d637855f4ef3d09f67d9d173fa2f585dfbf0fd48555c4d80a36d7a8096,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727096849802494969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f,PodSandboxId:10a49ceedb6d63dfd25b55150d5d26085608a48663bd0221079001a1cea652a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096843509165688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727096387328890914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240450572434,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240372228978,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727096228373784218,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727096228199442884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727096215629506566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727096215612604563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd85b1fb-8d8c-4a1b-bcea-ebce74c264f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.808277352Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5dc45346-edbb-4b01-b753-0f3596a42408 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.808583981Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5dc45346-edbb-4b01-b753-0f3596a42408 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.809848876Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e440061b-18bd-4d8b-96a8-26660888c515 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.810270398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097179810247911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e440061b-18bd-4d8b-96a8-26660888c515 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.810792975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0db348df-5c31-4d29-a495-522808571354 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.810849065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0db348df-5c31-4d29-a495-522808571354 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.811276094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d27e0a2d3851698bf14a74ab80a5aa5c92e2b29d0e3e5daf878fedaa77a028b,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096926161834491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327bfbcf6b79a16f1a5d0c94377815fc5fbb5bc82e544e68403e7ec0e90448e8,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096897149733438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f3d21af63b5abeda02040f268e6cc8e42b9f5c0d833e8e462586290e7f1d4c6,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096885162755257,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ef888f8ec955bef6dc9006e9b04cd7f0c520501780bf227a51838b9b055d5,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727096884150521741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5776f6ad95118b2c81dead9b92b71a822195a4bef5adbf5871dcef1697e6d5a6,PodSandboxId:91947e1a82d06b511d7c18ef8debffb602cc4a5086f7adf39c515c6c7780dfe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096883437498706,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31662037f5b073826dbc31fa11734016648662a603866125155e58446d4c73fe,PodSandboxId:62c7b6d1bfb3fcf355ef1262dbf0a6981964acecb49c05105bf7368bae4ee0f2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096862113200841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439831b6eefde7ddc923373d885892d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6527aca4afc7a32189ae29950d639e34564886f591210b00866727f72ecf2617,PodSandboxId:dd9ae27e8638bebbdcb54e62019125f61b83446a30ad75f1e242f56744544025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096850427518493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:fddebc96422aa20750bd4deb2fa7a71b609a0b73820282e5572365906bad733d,PodSandboxId:190c2732e8a28ab3ed97d3ff86abb432daa7bfa8bfbeed10c4fbb8fea19647cb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727096850135206176,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc4e39b
b9f3f77cad4a321dfd137c68026104431291fca9781d2bc69c01eda2,PodSandboxId:757baf6c9026cc6a6c35376a447df19610b1c547d345e66b3943e826e53d744b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096850108299496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0267858752e0468b892f98165ea7b1e17a2afde6ca05faccacf5ab35984ae965,PodSandboxId:694418cca7eb400f2a4cb270d0a9f891885c67d0bff9eeba86619473c970f3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096850189041185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c4427bd859a561476ab926494c2f6a0c2babe6ae47fd7b07d495a1ccb47adbb,PodSandboxId:c1b4e30bea3c71aa0ef8865f692497339ab86e89f3a99aa3e70cd62bf3002a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096850027124073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063ee6e5cb4852bcf99e81658a40b6a882427bed47d9cff993d0a1d51f047fab,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727096849904481582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb38d637855f4ef3d09f67d9d173fa2f585dfbf0fd48555c4d80a36d7a8096,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727096849802494969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f,PodSandboxId:10a49ceedb6d63dfd25b55150d5d26085608a48663bd0221079001a1cea652a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096843509165688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727096387328890914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240450572434,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240372228978,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727096228373784218,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727096228199442884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727096215629506566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727096215612604563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0db348df-5c31-4d29-a495-522808571354 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.857218455Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3721a4a6-261a-44f4-a5a9-227950671771 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.857295701Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3721a4a6-261a-44f4-a5a9-227950671771 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.859774518Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b62d06c3-140e-4ec1-aad6-f69cdce93fae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.860272287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097179860243433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b62d06c3-140e-4ec1-aad6-f69cdce93fae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.860910717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5f7065a-b9e4-4446-8f1f-d9e43d94c613 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.860989266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5f7065a-b9e4-4446-8f1f-d9e43d94c613 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.861400884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d27e0a2d3851698bf14a74ab80a5aa5c92e2b29d0e3e5daf878fedaa77a028b,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096926161834491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327bfbcf6b79a16f1a5d0c94377815fc5fbb5bc82e544e68403e7ec0e90448e8,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096897149733438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f3d21af63b5abeda02040f268e6cc8e42b9f5c0d833e8e462586290e7f1d4c6,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096885162755257,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ef888f8ec955bef6dc9006e9b04cd7f0c520501780bf227a51838b9b055d5,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727096884150521741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5776f6ad95118b2c81dead9b92b71a822195a4bef5adbf5871dcef1697e6d5a6,PodSandboxId:91947e1a82d06b511d7c18ef8debffb602cc4a5086f7adf39c515c6c7780dfe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096883437498706,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31662037f5b073826dbc31fa11734016648662a603866125155e58446d4c73fe,PodSandboxId:62c7b6d1bfb3fcf355ef1262dbf0a6981964acecb49c05105bf7368bae4ee0f2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096862113200841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439831b6eefde7ddc923373d885892d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6527aca4afc7a32189ae29950d639e34564886f591210b00866727f72ecf2617,PodSandboxId:dd9ae27e8638bebbdcb54e62019125f61b83446a30ad75f1e242f56744544025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096850427518493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:fddebc96422aa20750bd4deb2fa7a71b609a0b73820282e5572365906bad733d,PodSandboxId:190c2732e8a28ab3ed97d3ff86abb432daa7bfa8bfbeed10c4fbb8fea19647cb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727096850135206176,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc4e39b
b9f3f77cad4a321dfd137c68026104431291fca9781d2bc69c01eda2,PodSandboxId:757baf6c9026cc6a6c35376a447df19610b1c547d345e66b3943e826e53d744b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096850108299496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0267858752e0468b892f98165ea7b1e17a2afde6ca05faccacf5ab35984ae965,PodSandboxId:694418cca7eb400f2a4cb270d0a9f891885c67d0bff9eeba86619473c970f3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096850189041185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c4427bd859a561476ab926494c2f6a0c2babe6ae47fd7b07d495a1ccb47adbb,PodSandboxId:c1b4e30bea3c71aa0ef8865f692497339ab86e89f3a99aa3e70cd62bf3002a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096850027124073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063ee6e5cb4852bcf99e81658a40b6a882427bed47d9cff993d0a1d51f047fab,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727096849904481582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb38d637855f4ef3d09f67d9d173fa2f585dfbf0fd48555c4d80a36d7a8096,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727096849802494969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f,PodSandboxId:10a49ceedb6d63dfd25b55150d5d26085608a48663bd0221079001a1cea652a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096843509165688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727096387328890914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240450572434,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240372228978,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727096228373784218,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727096228199442884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727096215629506566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727096215612604563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5f7065a-b9e4-4446-8f1f-d9e43d94c613 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.904300516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b785e20-7c34-464d-9b52-4b9c85cb8bae name=/runtime.v1.RuntimeService/Version
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.904384808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b785e20-7c34-464d-9b52-4b9c85cb8bae name=/runtime.v1.RuntimeService/Version
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.905898931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f862fcbe-1134-4c33-ac92-0a71b94bfb04 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.906365755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097179906340574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f862fcbe-1134-4c33-ac92-0a71b94bfb04 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.906982889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38103d78-3d91-4707-8908-7f38579825a7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.907066558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38103d78-3d91-4707-8908-7f38579825a7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:12:59 ha-097312 crio[3615]: time="2024-09-23 13:12:59.907511138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d27e0a2d3851698bf14a74ab80a5aa5c92e2b29d0e3e5daf878fedaa77a028b,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727096926161834491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327bfbcf6b79a16f1a5d0c94377815fc5fbb5bc82e544e68403e7ec0e90448e8,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727096897149733438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f3d21af63b5abeda02040f268e6cc8e42b9f5c0d833e8e462586290e7f1d4c6,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727096885162755257,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ef888f8ec955bef6dc9006e9b04cd7f0c520501780bf227a51838b9b055d5,PodSandboxId:1208cacfef830900c03332e3e25064f9922051e5f615eed5f353e9839bca7a0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727096884150521741,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbda806-091c-4e48-982a-296bbf03abd6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5776f6ad95118b2c81dead9b92b71a822195a4bef5adbf5871dcef1697e6d5a6,PodSandboxId:91947e1a82d06b511d7c18ef8debffb602cc4a5086f7adf39c515c6c7780dfe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727096883437498706,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31662037f5b073826dbc31fa11734016648662a603866125155e58446d4c73fe,PodSandboxId:62c7b6d1bfb3fcf355ef1262dbf0a6981964acecb49c05105bf7368bae4ee0f2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727096862113200841,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439831b6eefde7ddc923373d885892d5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6527aca4afc7a32189ae29950d639e34564886f591210b00866727f72ecf2617,PodSandboxId:dd9ae27e8638bebbdcb54e62019125f61b83446a30ad75f1e242f56744544025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727096850427518493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:fddebc96422aa20750bd4deb2fa7a71b609a0b73820282e5572365906bad733d,PodSandboxId:190c2732e8a28ab3ed97d3ff86abb432daa7bfa8bfbeed10c4fbb8fea19647cb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727096850135206176,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcc4e39b
b9f3f77cad4a321dfd137c68026104431291fca9781d2bc69c01eda2,PodSandboxId:757baf6c9026cc6a6c35376a447df19610b1c547d345e66b3943e826e53d744b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096850108299496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0267858752e0468b892f98165ea7b1e17a2afde6ca05faccacf5ab35984ae965,PodSandboxId:694418cca7eb400f2a4cb270d0a9f891885c67d0bff9eeba86619473c970f3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727096850189041185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c4427bd859a561476ab926494c2f6a0c2babe6ae47fd7b07d495a1ccb47adbb,PodSandboxId:c1b4e30bea3c71aa0ef8865f692497339ab86e89f3a99aa3e70cd62bf3002a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727096850027124073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063ee6e5cb4852bcf99e81658a40b6a882427bed47d9cff993d0a1d51f047fab,PodSandboxId:bcf3a61d4cd3d9b55ee351f4c648a07b5efa211abada5ab0831b1bee698ab227,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727096849904481582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415ae4cebae57e6b1ebe046e97e7cb98,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb38d637855f4ef3d09f67d9d173fa2f585dfbf0fd48555c4d80a36d7a8096,PodSandboxId:5647b2efd9d715c3975ffa772999aea10dddd8c0ef929e1079aa12a7c3743c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727096849802494969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7606555cae3af30f14e539fb18c319e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f,PodSandboxId:10a49ceedb6d63dfd25b55150d5d26085608a48663bd0221079001a1cea652a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727096843509165688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c8b3d3e1c9604dd8d7d45c15c2a91a759a62f04a047e5626d57a757a396bd4b,PodSandboxId:01a99cef826dda6f2b65d379c041e96505aa2085b58dd4630a3ae2c0052d503b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727096387328890914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4rksx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 378f72ef-8447-411d-a70b-bb355788eff4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828,PodSandboxId:09f40d2b506132af296453dc4125d2ff70d789a87f1da351ae25a90c863e1c5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240450572434,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txcxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6da5f25-f232-4649-9801-f3577210ea2e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab,PodSandboxId:d6346e81a93e3ab149256d0f37fd69af6c44f91e6e6662b3720a7bd343554d66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727096240372228978,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6g9x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af485e47-0e78-483e-8f35-a7a4ab53f014,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c,PodSandboxId:fa074de98ab0bb7558595bb7900fab097f2fa4cf091ae0c9ed5fd5c899cc2044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727096228373784218,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8l5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49216705-6e85-4b98-afbd-f4228b774321,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce,PodSandboxId:8efd7c52e41eb6dd5b30df6dc0b133cb2ffabe08abf473da0e79edcf137bc745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727096228199442884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-drj8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c5535e-7139-441f-9065-ef7d147582d2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad,PodSandboxId:46a49b5018b58cc60ab2c080f685d00c187e33e4c7790af775ed5baf71aefdca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727096215629506566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a4f10af129576cf98e9295b3acebd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431,PodSandboxId:e4cdc1cb583f42c1cf64e136ebe20075107963fc13da9144c568b67897e7e8a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727096215612604563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-097312,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e691f4013a742318fc23cd46bae362e8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38103d78-3d91-4707-8908-7f38579825a7 name=/runtime.v1.RuntimeService/ListContainers
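
The two ListContainers dumps above are CRI-O answering the kubelet's /runtime.v1.RuntimeService/ListContainers call over its UNIX socket; the empty ContainerFilter is why the log also notes "No filters were applied, returning full container list". For reference, a minimal Go sketch of issuing the same unfiltered request against the crio.sock path shown in the node annotations below (assuming the k8s.io/cri-api and google.golang.org/grpc modules are available on the debugging host; this is not part of the minikube test code):

    // list_containers.go: sketch of the unfiltered ListContainers call seen in
    // the CRI-O debug log above, sent over the same UNIX socket.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the runtime service endpoint CRI-O serves on.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial crio.sock: %v", err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // An empty ContainerFilter matches every container, mirroring the
        // "returning full container list" behaviour in the log.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{},
        })
        if err != nil {
            log.Fatalf("ListContainers: %v", err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %-25s attempt=%d  %s\n",
                c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
        }
    }
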
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7d27e0a2d3851       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   1208cacfef830       storage-provisioner
	327bfbcf6b79a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   bcf3a61d4cd3d       kube-controller-manager-ha-097312
	1f3d21af63b5a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   5647b2efd9d71       kube-apiserver-ha-097312
	e91ef888f8ec9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   1208cacfef830       storage-provisioner
	5776f6ad95118       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   91947e1a82d06       busybox-7dff88458-4rksx
	31662037f5b07       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   62c7b6d1bfb3f       kube-vip-ha-097312
	6527aca4afc7a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   dd9ae27e8638b       kube-proxy-drj8m
	0267858752e04       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   694418cca7eb4       kube-scheduler-ha-097312
	fddebc96422aa       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   190c2732e8a28       kindnet-j8l5t
	bcc4e39bb9f3f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   757baf6c9026c       coredns-7c65d6cfc9-txcxz
	1c4427bd859a5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   c1b4e30bea3c7       etcd-ha-097312
	063ee6e5cb485       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   bcf3a61d4cd3d       kube-controller-manager-ha-097312
	f3bb38d637855       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   5647b2efd9d71       kube-apiserver-ha-097312
	7524fbdf92495       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   10a49ceedb6d6       coredns-7c65d6cfc9-6g9x2
	0c8b3d3e1c960       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   01a99cef826dd       busybox-7dff88458-4rksx
	6494b72ca963e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   09f40d2b50613       coredns-7c65d6cfc9-txcxz
	cead05960724e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   d6346e81a93e3       coredns-7c65d6cfc9-6g9x2
	03670fd92c8a8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   fa074de98ab0b       kindnet-j8l5t
	37b6ad938698e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   8efd7c52e41eb       kube-proxy-drj8m
	9bfbdbe2c35f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   46a49b5018b58       etcd-ha-097312
	5c9e8fb5e944b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago      Exited              kube-scheduler            0                   e4cdc1cb583f4       kube-scheduler-ha-097312
	
	
	==> coredns [6494b72ca963ec5a21179322ce5a1a3cd2ecf6063d12290ea8c06659ede25828] <==
	[INFO] 10.244.0.4:56395 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159124s
	[INFO] 10.244.2.2:48128 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168767s
	[INFO] 10.244.2.2:38686 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001366329s
	[INFO] 10.244.2.2:54280 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098386s
	[INFO] 10.244.2.2:36178 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083893s
	[INFO] 10.244.1.2:36479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151724s
	[INFO] 10.244.1.2:52581 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000183399s
	[INFO] 10.244.1.2:36358 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015472s
	[INFO] 10.244.0.4:37418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198313s
	[INFO] 10.244.2.2:52660 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011216s
	[INFO] 10.244.1.2:33460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123493s
	[INFO] 10.244.1.2:42619 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000187646s
	[INFO] 10.244.0.4:50282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110854s
	[INFO] 10.244.0.4:48865 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169177s
	[INFO] 10.244.0.4:52671 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110814s
	[INFO] 10.244.2.2:49013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236486s
	[INFO] 10.244.2.2:37600 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000236051s
	[INFO] 10.244.2.2:54687 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137539s
	[INFO] 10.244.1.2:37754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237319s
	[INFO] 10.244.1.2:50571 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167449s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1747&timeout=6m52s&timeoutSeconds=412&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
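
The query lines above are routine A/AAAA/PTR lookups from pods against the cluster DNS service, answered normally until the apiserver watch at the end is cut off. A hedged Go sketch that replays one of those lookups by pointing a resolver straight at the DNS service address implied by the PTR records (10.96.0.10 is an assumption inferred from "10.0.96.10.in-addr.arpa", not read from the test config):

    // dns_probe.go: replays one of the lookups logged above against the
    // assumed cluster DNS ClusterIP; only meaningful from inside the cluster
    // network, so expect a timeout when run elsewhere.
    package main

    import (
        "context"
        "fmt"
        "log"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                // Force every query to the assumed kube-dns ClusterIP instead
                // of whatever /etc/resolv.conf points at.
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same name the pods in the log were resolving.
        addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
        if err != nil {
            log.Fatalf("lookup failed (expected off-cluster): %v", err)
        }
        fmt.Println("resolved to:", addrs)
    }
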
	
	
	==> coredns [7524fbdf9249559c4c6f8270174b2f08e4a4e1df3189f4130ee8c96ca02c3a6f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1105559606]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:07:34.664) (total time: 10000ms):
	Trace[1105559606]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:07:44.664)
	Trace[1105559606]: [10.00085903s] [10.00085903s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[342942772]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:07:39.227) (total time: 10001ms):
	Trace[342942772]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:07:49.229)
	Trace[342942772]: [10.001532009s] [10.001532009s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
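
The reflector failures in this instance are client-go inside CoreDNS losing the kubernetes Service VIP (10.96.0.1:443) while the apiserver restarts: "no route to host" and "connection refused" fail at the TCP layer, whereas "net/http: TLS handshake timeout" means TCP connected but the handshake never completed. A rough Go probe that separates those cases, assuming it runs somewhere that can route the VIP (sketch only, not part of the test suite):

    // apiserver_probe.go: distinguishes the failure modes seen in the CoreDNS
    // reflector errors above. 10.96.0.1:443 is the Service VIP quoted in the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net"
        "time"
    )

    func main() {
        const target = "10.96.0.1:443"

        // Step 1: raw TCP. "no route to host" / "connection refused" fail here.
        conn, err := net.DialTimeout("tcp", target, 3*time.Second)
        if err != nil {
            fmt.Println("TCP dial failed:", err)
            return
        }
        defer conn.Close()

        // Step 2: TLS handshake. A hung apiserver surfaces here as a timeout,
        // matching the "net/http: TLS handshake timeout" lines in the log.
        // InsecureSkipVerify is acceptable for a reachability probe only.
        tlsConn := tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
        tlsConn.SetDeadline(time.Now().Add(5 * time.Second))
        if err := tlsConn.Handshake(); err != nil {
            fmt.Println("TLS handshake failed:", err)
            return
        }
        fmt.Println("apiserver endpoint reachable over TLS")
    }
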
	
	
	==> coredns [bcc4e39bb9f3f77cad4a321dfd137c68026104431291fca9781d2bc69c01eda2] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44944->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44944->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44928->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44928->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:44926->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1508766833]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:07:43.940) (total time: 10664ms):
	Trace[1508766833]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:44926->10.96.0.1:443: read: connection reset by peer 10663ms (13:07:54.604)
	Trace[1508766833]: [10.664217655s] [10.664217655s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:44926->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cead05960724ef0a7c164689c7f077c5173bf75483e09a02ea44bf3b5dde8cab] <==
	[INFO] 10.244.2.2:57929 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002002291s
	[INFO] 10.244.2.2:39920 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241567s
	[INFO] 10.244.2.2:40496 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084082s
	[INFO] 10.244.1.2:53956 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001953841s
	[INFO] 10.244.1.2:39693 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161735s
	[INFO] 10.244.1.2:59255 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001392042s
	[INFO] 10.244.1.2:33162 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137674s
	[INFO] 10.244.1.2:56819 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135224s
	[INFO] 10.244.0.4:58065 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142108s
	[INFO] 10.244.0.4:49950 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114547s
	[INFO] 10.244.0.4:48467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051186s
	[INFO] 10.244.2.2:57485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120774s
	[INFO] 10.244.2.2:47368 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105596s
	[INFO] 10.244.2.2:52953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077623s
	[INFO] 10.244.1.2:45470 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011128s
	[INFO] 10.244.1.2:35601 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157053s
	[INFO] 10.244.0.4:60925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000610878s
	[INFO] 10.244.2.2:48335 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176802s
	[INFO] 10.244.1.2:39758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190843s
	[INFO] 10.244.1.2:35713 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110523s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1747&timeout=6m4s&timeoutSeconds=364&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1798&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1740&timeout=9m58s&timeoutSeconds=598&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-097312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_57_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:57:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:13:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:08:09 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:08:09 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:08:09 +0000   Mon, 23 Sep 2024 12:57:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:08:09 +0000   Mon, 23 Sep 2024 12:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    ha-097312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fef43eb48e8a42b5815ed7c921d42333
	  System UUID:                fef43eb4-8e8a-42b5-815e-d7c921d42333
	  Boot ID:                    22749ef5-5a8a-4d9f-b42e-96dd2d4e32eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4rksx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-6g9x2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-txcxz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-097312                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-j8l5t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-097312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-097312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-drj8m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-097312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-097312                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
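
The percentages in the Allocated resources block are simply the summed pod requests and limits divided by the node's Allocatable values above (2 CPUs = 2000m, 2164184Ki of memory). A small sketch reproducing the same arithmetic:

    // alloc_percent.go: reproduces the request/limit percentages shown in the
    // "Allocated resources" table from the node's Allocatable values.
    package main

    import "fmt"

    func main() {
        const (
            allocCPUMilli = 2000    // 2 CPUs
            allocMemKi    = 2164184 // from the Allocatable block above
        )
        pct := func(used, total float64) int { return int(used / total * 100) }

        fmt.Printf("cpu requests:    %d%%\n", pct(950, allocCPUMilli))   // 950m  -> 47%
        fmt.Printf("cpu limits:      %d%%\n", pct(100, allocCPUMilli))   // 100m  -> 5%
        fmt.Printf("memory requests: %d%%\n", pct(290*1024, allocMemKi)) // 290Mi -> 13%
        fmt.Printf("memory limits:   %d%%\n", pct(390*1024, allocMemKi)) // 390Mi -> 18%
    }
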
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m47s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-097312 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-097312 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-097312 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                    node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-097312 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Warning  ContainerGCFailed        5m58s (x2 over 6m58s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m45s (x3 over 6m34s)  kubelet          Node ha-097312 status is now: NodeNotReady
	  Normal   RegisteredNode           4m49s                  node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal   RegisteredNode           4m41s                  node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-097312 event: Registered Node ha-097312 in Controller
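
The ContainerGCFailed and NodeNotReady events are consistent with the kubelet briefly losing its CRI endpoint while CRI-O restarted (the later log lines come from crio[3615]). A quick Go check of whether that socket is present and accepting connections, in the same spirit (sketch only):

    // crisock_check.go: checks the CRI endpoint the ContainerGCFailed event
    // complains about ("dial unix /var/run/crio/crio.sock").
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock"

        if _, err := os.Stat(sock); err != nil {
            // Matches the "no such file or directory" text in the kubelet event.
            fmt.Println("socket missing:", err)
            return
        }
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("socket present but not accepting connections:", err)
            return
        }
        conn.Close()
        fmt.Println("CRI-O runtime socket is accepting connections")
    }
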
	
	
	Name:               ha-097312-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_57_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:57:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:13:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:08:55 +0000   Mon, 23 Sep 2024 13:08:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:08:55 +0000   Mon, 23 Sep 2024 13:08:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:08:55 +0000   Mon, 23 Sep 2024 13:08:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:08:55 +0000   Mon, 23 Sep 2024 13:08:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    ha-097312-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 226ea4f6db5b44f7bdab73033cb7ae33
	  System UUID:                226ea4f6-db5b-44f7-bdab-73033cb7ae33
	  Boot ID:                    d82097e7-308e-44f7-a550-0d3292edbeaf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wz97n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-097312-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-hcclj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-097312-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-097312-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-z6ss5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-097312-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-097312-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m43s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-097312-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-097312-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-097312-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-097312-m02 status is now: NodeNotReady
	  Normal  Starting                 5m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node ha-097312-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node ha-097312-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m14s)  kubelet          Node ha-097312-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m49s                  node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           4m41s                  node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-097312-m02 event: Registered Node ha-097312-m02 in Controller
	
	
	Name:               ha-097312-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-097312-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-097312
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_00_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:00:25 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-097312-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:10:32 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 13:10:12 +0000   Mon, 23 Sep 2024 13:11:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 13:10:12 +0000   Mon, 23 Sep 2024 13:11:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 13:10:12 +0000   Mon, 23 Sep 2024 13:11:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 13:10:12 +0000   Mon, 23 Sep 2024 13:11:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    ha-097312-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 23903b49596849ed8163495c455231a4
	  System UUID:                23903b49-5968-49ed-8163-495c455231a4
	  Boot ID:                    08d8ee6f-9bbf-458d-8f61-8151e6dbaa95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pw88p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-pzs94              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-7hlnw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-097312-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-097312-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-097312-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-097312-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m49s                  node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   RegisteredNode           4m41s                  node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   NodeNotReady             4m9s                   node-controller  Node ha-097312-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-097312-m04 event: Registered Node ha-097312-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-097312-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-097312-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-097312-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-097312-m04 has been rebooted, boot id: 08d8ee6f-9bbf-458d-8f61-8151e6dbaa95
	  Normal   NodeReady                2m48s                  kubelet          Node ha-097312-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-097312-m04 status is now: NodeNotReady
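The node descriptions above show ha-097312-m02 back to Ready after its kubelet restart, while ha-097312-m04 still carries node.kubernetes.io/unreachable taints and Unknown conditions from the window in which its kubelet stopped posting status. Purely as an illustrative sketch (an editor-added example, not part of the captured test output), the same Ready condition and taints could be read programmatically with client-go; the kubeconfig location and the minimal error handling are assumptions here.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); the path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Print the Ready condition, mirroring the Conditions table in the describe output above.
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
			}
		}
		// Print taints such as node.kubernetes.io/unreachable:NoSchedule.
		for _, t := range n.Spec.Taints {
			fmt.Printf("%s taint %s:%s\n", n.Name, t.Key, t.Effect)
		}
	}
}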
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.704633] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.056129] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055848] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170191] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.146996] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.300750] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.930853] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +3.791133] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.059635] kauditd_printk_skb: 158 callbacks suppressed
	[Sep23 12:57] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.088641] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.268527] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.165221] kauditd_printk_skb: 38 callbacks suppressed
	[Sep23 12:58] kauditd_printk_skb: 24 callbacks suppressed
	[Sep23 13:07] systemd-fstab-generator[3542]: Ignoring "noauto" option for root device
	[  +0.151805] systemd-fstab-generator[3554]: Ignoring "noauto" option for root device
	[  +0.171785] systemd-fstab-generator[3568]: Ignoring "noauto" option for root device
	[  +0.153828] systemd-fstab-generator[3580]: Ignoring "noauto" option for root device
	[  +0.296871] systemd-fstab-generator[3608]: Ignoring "noauto" option for root device
	[  +2.935420] systemd-fstab-generator[3703]: Ignoring "noauto" option for root device
	[  +6.615420] kauditd_printk_skb: 132 callbacks suppressed
	[ +12.051283] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.055590] kauditd_printk_skb: 1 callbacks suppressed
	[Sep23 13:08] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [1c4427bd859a561476ab926494c2f6a0c2babe6ae47fd7b07d495a1ccb47adbb] <==
	{"level":"info","ts":"2024-09-23T13:09:36.440269Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:09:36.443729Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b56431cc78e971c","to":"78afa68a47379fab","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-23T13:09:36.443861Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"warn","ts":"2024-09-23T13:10:15.816259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.600782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-097312-m04\" ","response":"range_response_count:1 size:3394"}
	{"level":"info","ts":"2024-09-23T13:10:15.816613Z","caller":"traceutil/trace.go:171","msg":"trace[682167175] range","detail":"{range_begin:/registry/minions/ha-097312-m04; range_end:; response_count:1; response_revision:2513; }","duration":"148.008469ms","start":"2024-09-23T13:10:15.668577Z","end":"2024-09-23T13:10:15.816585Z","steps":["trace[682167175] 'range keys from in-memory index tree'  (duration: 146.385978ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:10:26.097838Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.174:57002","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-23T13:10:26.130890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b56431cc78e971c switched to configuration voters=(7734443200941561628 16460858273394025899)"}
	{"level":"info","ts":"2024-09-23T13:10:26.133077Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1dec7d0c7f2d2dcb","local-member-id":"6b56431cc78e971c","removed-remote-peer-id":"78afa68a47379fab","removed-remote-peer-urls":["https://192.168.39.174:2380"]}
	{"level":"info","ts":"2024-09-23T13:10:26.133174Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"78afa68a47379fab"}
	{"level":"warn","ts":"2024-09-23T13:10:26.133249Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"6b56431cc78e971c","removed-member-id":"78afa68a47379fab"}
	{"level":"warn","ts":"2024-09-23T13:10:26.133394Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-09-23T13:10:26.133473Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:10:26.133529Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"78afa68a47379fab"}
	{"level":"warn","ts":"2024-09-23T13:10:26.133874Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:10:26.133966Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:10:26.134064Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"warn","ts":"2024-09-23T13:10:26.134283Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab","error":"context canceled"}
	{"level":"warn","ts":"2024-09-23T13:10:26.134413Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"78afa68a47379fab","error":"failed to read 78afa68a47379fab on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-23T13:10:26.134513Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"warn","ts":"2024-09-23T13:10:26.134782Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab","error":"context canceled"}
	{"level":"info","ts":"2024-09-23T13:10:26.134846Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:10:26.134886Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:10:26.134923Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6b56431cc78e971c","removed-remote-peer-id":"78afa68a47379fab"}
	{"level":"warn","ts":"2024-09-23T13:10:26.145382Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6b56431cc78e971c","remote-peer-id-stream-handler":"6b56431cc78e971c","remote-peer-id-from":"78afa68a47379fab"}
	{"level":"warn","ts":"2024-09-23T13:10:26.148189Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6b56431cc78e971c","remote-peer-id-stream-handler":"6b56431cc78e971c","remote-peer-id-from":"78afa68a47379fab"}
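The etcd log above records peer 78afa68a47379fab (the ha-097312-m03 member at 192.168.39.174:2380) being removed from the cluster, after which its streams are rejected. As a rough illustration only, and not something the test harness runs, the remaining membership could be listed with the etcd v3 Go client along these lines; the endpoint and certificate paths are assumptions and would need to match the cluster's actual TLS material.

package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/client/pkg/v3/transport"
	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// TLS files for the minikube-managed etcd; these paths are assumptions for the sketch.
	tlsInfo := transport.TLSInfo{
		CertFile:      "/var/lib/minikube/certs/etcd/server.crt",
		KeyFile:       "/var/lib/minikube/certs/etcd/server.key",
		TrustedCAFile: "/var/lib/minikube/certs/etcd/ca.crt",
	}
	tlsCfg, err := tlsInfo.ClientConfig()
	if err != nil {
		panic(err)
	}
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://192.168.39.160:2379"},
		DialTimeout: 5 * time.Second,
		TLS:         tlsCfg,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	resp, err := cli.MemberList(ctx)
	if err != nil {
		panic(err)
	}
	// After the removal logged above, only the remaining control-plane peers should be listed.
	for _, m := range resp.Members {
		fmt.Printf("%x %s %v\n", m.ID, m.Name, m.PeerURLs)
	}
}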
	
	
	==> etcd [9bfbdbe2c35f63b185f28992c717601392287e693216d7332cfd0b4b6597c8ad] <==
	{"level":"warn","ts":"2024-09-23T13:05:47.883071Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:05:46.700228Z","time spent":"1.182836709s","remote":"127.0.0.1:35406","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 "}
	2024/09/23 13:05:47 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-23T13:05:47.943576Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.160:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T13:05:47.943739Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.160:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T13:05:47.945279Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6b56431cc78e971c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-23T13:05:47.945570Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945660Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945705Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945841Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945896Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945950Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b56431cc78e971c","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.945977Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e470b762e3b365ab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946000Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946028Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946126Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946323Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946389Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946441Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b56431cc78e971c","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.946468Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"78afa68a47379fab"}
	{"level":"info","ts":"2024-09-23T13:05:47.951521Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.160:2380"}
	{"level":"warn","ts":"2024-09-23T13:05:47.951546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.26307996s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-23T13:05:47.951712Z","caller":"traceutil/trace.go:171","msg":"trace[672836981] range","detail":"{range_begin:; range_end:; }","duration":"9.263262793s","start":"2024-09-23T13:05:38.688440Z","end":"2024-09-23T13:05:47.951702Z","steps":["trace[672836981] 'agreement among raft nodes before linearized reading'  (duration: 9.263077519s)"],"step_count":1}
	{"level":"error","ts":"2024-09-23T13:05:47.951772Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-23T13:05:47.951730Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.160:2380"}
	{"level":"info","ts":"2024-09-23T13:05:47.951842Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-097312","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.160:2380"],"advertise-client-urls":["https://192.168.39.160:2379"]}
	
	
	==> kernel <==
	 13:13:00 up 16 min,  0 users,  load average: 0.09, 0.30, 0.25
	Linux ha-097312 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [03670fd92c8a80c9d88e88b722428ce8ea7ed15a32a25c8c4c948685c15fe41c] <==
	I0923 13:05:19.635787       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:05:19.635838       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:05:19.636002       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:05:19.636028       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:05:19.636074       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:05:19.636081       1 main.go:299] handling current node
	I0923 13:05:19.636092       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:05:19.636096       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	E0923 13:05:26.765078       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1798&timeout=8m22s&timeoutSeconds=502&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0923 13:05:29.635162       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:05:29.635212       1 main.go:299] handling current node
	I0923 13:05:29.635229       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:05:29.635235       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:05:29.635376       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:05:29.635395       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:05:29.635438       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:05:29.635443       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:05:39.644516       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:05:39.644682       1 main.go:299] handling current node
	I0923 13:05:39.644716       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:05:39.644740       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:05:39.644975       1 main.go:295] Handling node with IPs: map[192.168.39.174:{}]
	I0923 13:05:39.645100       1 main.go:322] Node ha-097312-m03 has CIDR [10.244.2.0/24] 
	I0923 13:05:39.645189       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:05:39.645210       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [fddebc96422aa20750bd4deb2fa7a71b609a0b73820282e5572365906bad733d] <==
	I0923 13:12:11.353177       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:12:21.344668       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:12:21.344778       1 main.go:299] handling current node
	I0923 13:12:21.344807       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:12:21.344813       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:12:21.345006       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:12:21.345063       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:12:31.344482       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:12:31.344710       1 main.go:299] handling current node
	I0923 13:12:31.344763       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:12:31.344784       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:12:31.344956       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:12:31.344992       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:12:41.352807       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:12:41.352847       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:12:41.353006       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:12:41.353026       1 main.go:299] handling current node
	I0923 13:12:41.353038       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:12:41.353042       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:12:51.344422       1 main.go:295] Handling node with IPs: map[192.168.39.214:{}]
	I0923 13:12:51.344525       1 main.go:322] Node ha-097312-m02 has CIDR [10.244.1.0/24] 
	I0923 13:12:51.344720       1 main.go:295] Handling node with IPs: map[192.168.39.20:{}]
	I0923 13:12:51.344753       1 main.go:322] Node ha-097312-m04 has CIDR [10.244.3.0/24] 
	I0923 13:12:51.344838       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0923 13:12:51.345206       1 main.go:299] handling current node
	
	
	==> kube-apiserver [1f3d21af63b5abeda02040f268e6cc8e42b9f5c0d833e8e462586290e7f1d4c6] <==
	I0923 13:08:07.237145       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0923 13:08:07.346131       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 13:08:07.347574       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 13:08:07.347711       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 13:08:07.358076       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 13:08:07.358665       1 aggregator.go:171] initial CRD sync complete...
	I0923 13:08:07.358777       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 13:08:07.358846       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 13:08:07.358950       1 cache.go:39] Caches are synced for autoregister controller
	I0923 13:08:07.364287       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 13:08:07.381264       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:08:07.381366       1 policy_source.go:224] refreshing policies
	W0923 13:08:07.381718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.174]
	I0923 13:08:07.384081       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 13:08:07.405710       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0923 13:08:07.414076       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0923 13:08:07.418198       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 13:08:07.418352       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 13:08:07.436510       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 13:08:07.438221       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 13:08:07.443350       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0923 13:08:07.469912       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 13:08:08.250797       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0923 13:08:08.734248       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160 192.168.39.174]
	W0923 13:10:38.740234       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160 192.168.39.214]
	
	
	==> kube-apiserver [f3bb38d637855f4ef3d09f67d9d173fa2f585dfbf0fd48555c4d80a36d7a8096] <==
	I0923 13:07:30.126591       1 options.go:228] external host was not specified, using 192.168.39.160
	I0923 13:07:30.129914       1 server.go:142] Version: v1.31.1
	I0923 13:07:30.129966       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:07:31.071830       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0923 13:07:31.078049       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:07:31.078478       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0923 13:07:31.078717       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0923 13:07:31.079040       1 instance.go:232] Using reconciler: lease
	W0923 13:07:51.062203       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0923 13:07:51.062304       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0923 13:07:51.080240       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0923 13:07:51.080248       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [063ee6e5cb4852bcf99e81658a40b6a882427bed47d9cff993d0a1d51f047fab] <==
	I0923 13:07:31.369865       1 serving.go:386] Generated self-signed cert in-memory
	I0923 13:07:31.569092       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0923 13:07:31.571140       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:07:31.573217       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0923 13:07:31.573356       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 13:07:31.573427       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 13:07:31.573438       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0923 13:07:52.087869       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.160:8443/healthz\": dial tcp 192.168.39.160:8443: connect: connection refused"
	
	
	==> kube-controller-manager [327bfbcf6b79a16f1a5d0c94377815fc5fbb5bc82e544e68403e7ec0e90448e8] <==
	I0923 13:10:37.064925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m03"
	I0923 13:10:37.064953       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-097312-m04"
	E0923 13:10:37.115796       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-097312-m03\", UID:\"5a987b40-0dae-4625-9503-cbfe281dc217\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-097312-m03\", UID:\"21262f99-4132-4d03-8c23-d7cc5f856a68\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-097312-m03\" not found" logger="UnhandledError"
	E0923 13:10:37.116143       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-097312-m03\", UID:\"a0589012-3df0-4975-88b9-b1d8ac070501\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-097312-m03\", UID:\"21262f99-4132-4d03-8c23-d7cc5f856a68\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-097312-m03\" not found" logger="UnhandledError"
	E0923 13:10:39.253731       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:10:39.253856       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:10:39.253885       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:10:39.253912       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:10:39.253935       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:10:59.254775       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:10:59.254874       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:10:59.254881       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:10:59.254886       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:10:59.254891       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	I0923 13:11:14.403228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:11:14.470965       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	I0923 13:11:14.541893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="94.172157ms"
	I0923 13:11:14.542060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.144µs"
	I0923 13:11:16.926483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	E0923 13:11:19.255111       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:11:19.255151       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:11:19.255158       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:11:19.255164       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	E0923 13:11:19.255169       1 gc_controller.go:151] "Failed to get node" err="node \"ha-097312-m03\" not found" logger="pod-garbage-collector-controller" node="ha-097312-m03"
	I0923 13:11:19.617445       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-097312-m04"
	
	
	==> kube-proxy [37b6ad938698e107c07b01a67dcc4f6f6f2895a6b2ddc7a269056adab117c0ce] <==
	E0923 13:04:28.461329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:31.534469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:31.534555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:31.534498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:31.534825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:31.534691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:31.535025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:37.676134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:37.676214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:37.676286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:37.677122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:37.677448       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:37.677489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:46.892917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:46.893465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:49.965185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:49.965405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:04:49.965692       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:04:49.965791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:05:08.396897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:05:08.397298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1724\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:05:11.468720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:05:11.468781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-097312&resourceVersion=1710\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 13:05:14.541131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 13:05:14.541985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [6527aca4afc7a32189ae29950d639e34564886f591210b00866727f72ecf2617] <==
	E0923 13:07:32.780099       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-097312\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 13:07:35.854174       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-097312\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 13:07:38.924888       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-097312\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 13:07:45.069932       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-097312\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 13:07:54.285672       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-097312\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0923 13:08:13.276174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.160"]
	E0923 13:08:13.276324       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:08:13.317003       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:08:13.317078       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:08:13.317112       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:08:13.321518       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:08:13.322030       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:08:13.322065       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:08:13.325303       1 config.go:199] "Starting service config controller"
	I0923 13:08:13.325426       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:08:13.325575       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:08:13.325728       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:08:13.327678       1 config.go:328] "Starting node config controller"
	I0923 13:08:13.327839       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:08:13.426756       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:08:13.426846       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:08:13.430348       1 shared_informer.go:320] Caches are synced for node config
	W0923 13:11:23.758802       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0923 13:11:23.759083       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0923 13:11:23.759128       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [0267858752e0468b892f98165ea7b1e17a2afde6ca05faccacf5ab35984ae965] <==
	W0923 13:08:00.224373       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.160:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:00.224465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.160:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:00.329449       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.160:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:00.329595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.160:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:00.396316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.160:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:00.396386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.160:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:00.667000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.160:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:00.667073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.160:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:01.203706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.160:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:01.203841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.160:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:01.344155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.160:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:01.344303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.160:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:01.878611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.160:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.160:8443: connect: connection refused
	E0923 13:08:01.878892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.160:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8443: connect: connection refused" logger="UnhandledError"
	W0923 13:08:07.258185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:08:07.258276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:08:07.258465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:08:07.258539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:08:07.259334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 13:08:07.259421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 13:08:11.393527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 13:10:22.858118       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kk85c\": pod busybox-7dff88458-kk85c is already assigned to node \"ha-097312-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-kk85c" node="ha-097312-m04"
	E0923 13:10:22.858253       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2b78c921-989b-45b1-98f3-ec26262bc81b(default/busybox-7dff88458-kk85c) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-kk85c"
	E0923 13:10:22.858282       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kk85c\": pod busybox-7dff88458-kk85c is already assigned to node \"ha-097312-m04\"" pod="default/busybox-7dff88458-kk85c"
	I0923 13:10:22.858305       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-kk85c" node="ha-097312-m04"
	
	
	==> kube-scheduler [5c9e8fb5e944bc800446956248067c039e5c452de2651adf100841c5f062a431] <==
	E0923 12:57:00.190177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.223708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:57:00.223794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.255027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 12:57:00.255136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:57:00.582968       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:57:00.583073       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 12:57:02.534371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 12:59:14.854178       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vs524\": pod kube-proxy-vs524 is already assigned to node \"ha-097312-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vs524" node="ha-097312-m03"
	E0923 12:59:14.854357       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 92738649-c52b-44d5-866b-8cda751a538c(kube-system/kube-proxy-vs524) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vs524"
	E0923 12:59:14.854394       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vs524\": pod kube-proxy-vs524 is already assigned to node \"ha-097312-m03\"" pod="kube-system/kube-proxy-vs524"
	I0923 12:59:14.854436       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vs524" node="ha-097312-m03"
	E0923 13:05:38.218871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0923 13:05:38.347872       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0923 13:05:38.559003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0923 13:05:38.616843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0923 13:05:38.689154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0923 13:05:40.389602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0923 13:05:41.850274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0923 13:05:43.602431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0923 13:05:43.741733       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0923 13:05:45.717897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0923 13:05:45.847800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0923 13:05:46.585273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0923 13:05:47.874452       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 23 13:11:22 ha-097312 kubelet[1304]: E0923 13:11:22.360453    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097082360109405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:11:22 ha-097312 kubelet[1304]: E0923 13:11:22.360489    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097082360109405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:11:32 ha-097312 kubelet[1304]: E0923 13:11:32.362280    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097092361689691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:11:32 ha-097312 kubelet[1304]: E0923 13:11:32.362361    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097092361689691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:11:42 ha-097312 kubelet[1304]: E0923 13:11:42.364194    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097102363897526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:11:42 ha-097312 kubelet[1304]: E0923 13:11:42.364231    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097102363897526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:11:52 ha-097312 kubelet[1304]: E0923 13:11:52.366218    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097112365812284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:11:52 ha-097312 kubelet[1304]: E0923 13:11:52.366255    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097112365812284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:02 ha-097312 kubelet[1304]: E0923 13:12:02.165089    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:12:02 ha-097312 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:12:02 ha-097312 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:12:02 ha-097312 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:12:02 ha-097312 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:12:02 ha-097312 kubelet[1304]: E0923 13:12:02.368095    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097122367240692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:02 ha-097312 kubelet[1304]: E0923 13:12:02.368119    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097122367240692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:12 ha-097312 kubelet[1304]: E0923 13:12:12.376837    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097132371244107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:12 ha-097312 kubelet[1304]: E0923 13:12:12.377843    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097132371244107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:22 ha-097312 kubelet[1304]: E0923 13:12:22.381011    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097142380274077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:22 ha-097312 kubelet[1304]: E0923 13:12:22.381113    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097142380274077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:32 ha-097312 kubelet[1304]: E0923 13:12:32.383165    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097152382744717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:32 ha-097312 kubelet[1304]: E0923 13:12:32.383190    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097152382744717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:42 ha-097312 kubelet[1304]: E0923 13:12:42.385004    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097162384152270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:42 ha-097312 kubelet[1304]: E0923 13:12:42.385305    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097162384152270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:52 ha-097312 kubelet[1304]: E0923 13:12:52.388874    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097172387315348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:12:52 ha-097312 kubelet[1304]: E0923 13:12:52.388920    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727097172387315348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 13:12:59.462520  690530 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19690-662205/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
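The "bufio.Scanner: token too long" error in the stderr above is the standard Go failure mode when a scanner meets a single line longer than bufio.MaxScanTokenSize (64 KiB), here while reading lastStart.txt. A minimal sketch of the failure and the usual fix (this is not minikube's actual logs.go code, and the 10 MiB cap below is an illustrative assumption):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// A single "line" longer than bufio.MaxScanTokenSize (64 KiB) reproduces
	// the "bufio.Scanner: token too long" error seen in the stderr above.
	long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

	s := bufio.NewScanner(strings.NewReader(long))
	for s.Scan() {
	}
	fmt.Println("default scanner:", s.Err()) // bufio.Scanner: token too long

	// Usual fix: raise the scanner's maximum token size before scanning.
	s = bufio.NewScanner(strings.NewReader(long))
	s.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow tokens up to 10 MiB
	for s.Scan() {
	}
	fmt.Println("resized scanner:", s.Err()) // <nil>
}
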
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-097312 -n ha-097312
helpers_test.go:261: (dbg) Run:  kubectl --context ha-097312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.05s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (323.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-851928
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-851928
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-851928: exit status 82 (2m1.888304387s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-851928-m03"  ...
	* Stopping node "multinode-851928-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
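For reference, the exit status 82 above is returned by minikube itself after its internal stop timeout (GUEST_STOP_TIMEOUT); the test harness only observes the non-zero exit code. A minimal, hypothetical Go sketch of running such a command and inspecting its exit code with os/exec — the binary path, profile name, and deadline are assumptions for illustration, not the actual helpers used by multinode_test.go:

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Illustrative overall deadline; the real run relies on minikube's own stop timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "multinode-851928")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode()) // 82 in the run above
	} else if err != nil {
		fmt.Println("run error:", err)
	}
}
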
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-851928" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851928 --wait=true -v=8 --alsologtostderr
E0923 13:30:29.178600  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:30:36.849984  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851928 --wait=true -v=8 --alsologtostderr: (3m19.249508688s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-851928
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-851928 -n multinode-851928
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-851928 logs -n 25: (1.452429061s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m02:/home/docker/cp-test.txt                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1094698981/001/cp-test_multinode-851928-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m02:/home/docker/cp-test.txt                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928:/home/docker/cp-test_multinode-851928-m02_multinode-851928.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n multinode-851928 sudo cat                                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /home/docker/cp-test_multinode-851928-m02_multinode-851928.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m02:/home/docker/cp-test.txt                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03:/home/docker/cp-test_multinode-851928-m02_multinode-851928-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n multinode-851928-m03 sudo cat                                   | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /home/docker/cp-test_multinode-851928-m02_multinode-851928-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp testdata/cp-test.txt                                                | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m03:/home/docker/cp-test.txt                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1094698981/001/cp-test_multinode-851928-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m03:/home/docker/cp-test.txt                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928:/home/docker/cp-test_multinode-851928-m03_multinode-851928.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n multinode-851928 sudo cat                                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /home/docker/cp-test_multinode-851928-m03_multinode-851928.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m03:/home/docker/cp-test.txt                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m02:/home/docker/cp-test_multinode-851928-m03_multinode-851928-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n multinode-851928-m02 sudo cat                                   | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /home/docker/cp-test_multinode-851928-m03_multinode-851928-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-851928 node stop m03                                                          | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	| node    | multinode-851928 node start                                                             | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-851928                                                                | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC |                     |
	| stop    | -p multinode-851928                                                                     | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC |                     |
	| start   | -p multinode-851928                                                                     | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:29 UTC | 23 Sep 24 13:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-851928                                                                | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:29:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:29:50.670991  700346 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:29:50.671159  700346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:29:50.671170  700346 out.go:358] Setting ErrFile to fd 2...
	I0923 13:29:50.671174  700346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:29:50.671356  700346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:29:50.672020  700346 out.go:352] Setting JSON to false
	I0923 13:29:50.673098  700346 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":11534,"bootTime":1727086657,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 13:29:50.673180  700346 start.go:139] virtualization: kvm guest
	I0923 13:29:50.675424  700346 out.go:177] * [multinode-851928] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 13:29:50.676722  700346 notify.go:220] Checking for updates...
	I0923 13:29:50.676747  700346 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:29:50.678331  700346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:29:50.679738  700346 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 13:29:50.681319  700346 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 13:29:50.682751  700346 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 13:29:50.684091  700346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:29:50.685904  700346 config.go:182] Loaded profile config "multinode-851928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:29:50.686026  700346 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:29:50.686516  700346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:29:50.686566  700346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:29:50.702387  700346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0923 13:29:50.702982  700346 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:29:50.703653  700346 main.go:141] libmachine: Using API Version  1
	I0923 13:29:50.703675  700346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:29:50.704055  700346 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:29:50.704247  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:29:50.743263  700346 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 13:29:50.744641  700346 start.go:297] selected driver: kvm2
	I0923 13:29:50.744665  700346 start.go:901] validating driver "kvm2" against &{Name:multinode-851928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-851928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.25 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:29:50.744836  700346 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:29:50.745192  700346 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:29:50.745281  700346 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 13:29:50.761702  700346 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 13:29:50.762472  700346 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:29:50.762526  700346 cni.go:84] Creating CNI manager for ""
	I0923 13:29:50.762589  700346 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:29:50.762669  700346 start.go:340] cluster config:
	{Name:multinode-851928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-851928 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.25 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:29:50.762819  700346 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:29:50.764902  700346 out.go:177] * Starting "multinode-851928" primary control-plane node in "multinode-851928" cluster
	I0923 13:29:50.766222  700346 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:29:50.766297  700346 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 13:29:50.766309  700346 cache.go:56] Caching tarball of preloaded images
	I0923 13:29:50.766418  700346 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 13:29:50.766429  700346 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 13:29:50.766585  700346 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/config.json ...
	I0923 13:29:50.766840  700346 start.go:360] acquireMachinesLock for multinode-851928: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:29:50.766894  700346 start.go:364] duration metric: took 31.116µs to acquireMachinesLock for "multinode-851928"
	I0923 13:29:50.766909  700346 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:29:50.766915  700346 fix.go:54] fixHost starting: 
	I0923 13:29:50.767175  700346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:29:50.767209  700346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:29:50.782907  700346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45583
	I0923 13:29:50.783448  700346 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:29:50.783998  700346 main.go:141] libmachine: Using API Version  1
	I0923 13:29:50.784019  700346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:29:50.784343  700346 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:29:50.784533  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:29:50.784690  700346 main.go:141] libmachine: (multinode-851928) Calling .GetState
	I0923 13:29:50.786380  700346 fix.go:112] recreateIfNeeded on multinode-851928: state=Running err=<nil>
	W0923 13:29:50.786407  700346 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:29:50.788544  700346 out.go:177] * Updating the running kvm2 "multinode-851928" VM ...
	I0923 13:29:50.789950  700346 machine.go:93] provisionDockerMachine start ...
	I0923 13:29:50.789981  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:29:50.790263  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:50.792868  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:50.793337  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:50.793361  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:50.793583  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:29:50.793804  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:50.793957  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:50.794130  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:29:50.794384  700346 main.go:141] libmachine: Using SSH client type: native
	I0923 13:29:50.794596  700346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0923 13:29:50.794607  700346 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:29:50.899330  700346 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-851928
	
	I0923 13:29:50.899369  700346 main.go:141] libmachine: (multinode-851928) Calling .GetMachineName
	I0923 13:29:50.899721  700346 buildroot.go:166] provisioning hostname "multinode-851928"
	I0923 13:29:50.899760  700346 main.go:141] libmachine: (multinode-851928) Calling .GetMachineName
	I0923 13:29:50.899988  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:50.902733  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:50.903132  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:50.903175  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:50.903379  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:29:50.903606  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:50.903781  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:50.903879  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:29:50.904048  700346 main.go:141] libmachine: Using SSH client type: native
	I0923 13:29:50.904293  700346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0923 13:29:50.904313  700346 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-851928 && echo "multinode-851928" | sudo tee /etc/hostname
	I0923 13:29:51.024098  700346 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-851928
	
	I0923 13:29:51.024127  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:51.027053  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.027450  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:51.027498  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.027682  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:29:51.027891  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:51.028076  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:51.028341  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:29:51.028596  700346 main.go:141] libmachine: Using SSH client type: native
	I0923 13:29:51.028843  700346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0923 13:29:51.028862  700346 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-851928' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-851928/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-851928' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:29:51.131021  700346 main.go:141] libmachine: SSH cmd err, output: <nil>: 
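
The SSH command above is generated from the machine hostname: it either rewrites the 127.0.1.1 entry in /etc/hosts or appends one. The Go sketch below shows one way such a fragment could be composed; the hostsFixupCmd helper is hypothetical, not minikube's actual code.

	package main

	import "fmt"

	// hostsFixupCmd returns a shell snippet that ensures /etc/hosts maps
	// 127.0.1.1 to the given hostname, mirroring the command logged above.
	func hostsFixupCmd(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
				else
					echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname, hostname, hostname)
	}

	func main() {
		fmt.Println(hostsFixupCmd("multinode-851928"))
	}
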
	I0923 13:29:51.131054  700346 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 13:29:51.131079  700346 buildroot.go:174] setting up certificates
	I0923 13:29:51.131093  700346 provision.go:84] configureAuth start
	I0923 13:29:51.131108  700346 main.go:141] libmachine: (multinode-851928) Calling .GetMachineName
	I0923 13:29:51.131471  700346 main.go:141] libmachine: (multinode-851928) Calling .GetIP
	I0923 13:29:51.134297  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.134688  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:51.134715  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.134821  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:51.137369  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.137811  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:51.137864  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.138046  700346 provision.go:143] copyHostCerts
	I0923 13:29:51.138098  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 13:29:51.138169  700346 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 13:29:51.138192  700346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 13:29:51.138311  700346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 13:29:51.138453  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 13:29:51.138491  700346 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 13:29:51.138503  700346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 13:29:51.138550  700346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 13:29:51.138641  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 13:29:51.138669  700346 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 13:29:51.138680  700346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 13:29:51.138721  700346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 13:29:51.138817  700346 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.multinode-851928 san=[127.0.0.1 192.168.39.168 localhost minikube multinode-851928]
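
provision.go:117 generates a server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the machine hostname. The sketch below issues a comparable certificate with crypto/x509, self-signed for brevity; minikube signs with the profile's CA key instead, so this is an illustration of the SAN set, not the actual implementation.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newServerCert self-signs a certificate with the SANs from the log above.
	func newServerCert(ip, hostname string) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins." + hostname}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP(ip)},
			DNSNames:     []string{"localhost", "minikube", hostname},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}

	func main() {
		pemBytes, err := newServerCert("192.168.39.168", "multinode-851928")
		if err != nil {
			panic(err)
		}
		fmt.Printf("server.pem is %d bytes\n", len(pemBytes))
	}
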
	I0923 13:29:51.206163  700346 provision.go:177] copyRemoteCerts
	I0923 13:29:51.206254  700346 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:29:51.206281  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:51.208845  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.209244  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:51.209278  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.209579  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:29:51.209825  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:51.210069  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:29:51.210251  700346 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/multinode-851928/id_rsa Username:docker}
	I0923 13:29:51.292053  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 13:29:51.292129  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 13:29:51.318552  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 13:29:51.318646  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0923 13:29:51.344325  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 13:29:51.344427  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:29:51.371790  700346 provision.go:87] duration metric: took 240.68148ms to configureAuth
	I0923 13:29:51.371826  700346 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:29:51.372107  700346 config.go:182] Loaded profile config "multinode-851928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:29:51.372210  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:51.375425  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.375938  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:51.375975  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.376145  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:29:51.376416  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:51.376607  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:51.376782  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:29:51.377023  700346 main.go:141] libmachine: Using SSH client type: native
	I0923 13:29:51.377229  700346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0923 13:29:51.377244  700346 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:31:22.203082  700346 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 13:31:22.203121  700346 machine.go:96] duration metric: took 1m31.413151929s to provisionDockerMachine
	I0923 13:31:22.203135  700346 start.go:293] postStartSetup for "multinode-851928" (driver="kvm2")
	I0923 13:31:22.203146  700346 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:31:22.203167  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:31:22.203560  700346 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:31:22.203601  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:31:22.207072  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.207534  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:22.207560  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.207768  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:31:22.208013  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:31:22.208168  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:31:22.208297  700346 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/multinode-851928/id_rsa Username:docker}
	I0923 13:31:22.289623  700346 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:31:22.293999  700346 command_runner.go:130] > NAME=Buildroot
	I0923 13:31:22.294030  700346 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 13:31:22.294037  700346 command_runner.go:130] > ID=buildroot
	I0923 13:31:22.294063  700346 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 13:31:22.294072  700346 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 13:31:22.294115  700346 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:31:22.294130  700346 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 13:31:22.294196  700346 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 13:31:22.294299  700346 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 13:31:22.294316  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 13:31:22.294403  700346 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:31:22.304117  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:31:22.327851  700346 start.go:296] duration metric: took 124.697164ms for postStartSetup
	I0923 13:31:22.327905  700346 fix.go:56] duration metric: took 1m31.560989633s for fixHost
	I0923 13:31:22.327937  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:31:22.331086  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.331506  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:22.331553  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.331646  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:31:22.331862  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:31:22.332012  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:31:22.332258  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:31:22.332481  700346 main.go:141] libmachine: Using SSH client type: native
	I0923 13:31:22.332670  700346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0923 13:31:22.332681  700346 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:31:22.434552  700346 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727098282.409817624
	
	I0923 13:31:22.434582  700346 fix.go:216] guest clock: 1727098282.409817624
	I0923 13:31:22.434593  700346 fix.go:229] Guest: 2024-09-23 13:31:22.409817624 +0000 UTC Remote: 2024-09-23 13:31:22.32791117 +0000 UTC m=+91.695981062 (delta=81.906454ms)
	I0923 13:31:22.434638  700346 fix.go:200] guest clock delta is within tolerance: 81.906454ms
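
fix.go compares the guest clock (the `date +%s.%N` output above) with the host clock and only resyncs when the delta exceeds a tolerance. Below is a minimal sketch of that comparison using the timestamps from the log; the 1s tolerance is an assumption, not necessarily minikube's threshold.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns
	// guest minus host. Illustrative only.
	func clockDelta(guest string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		host := time.Unix(1727098282, 327911170) // "Remote" wall clock from the log
		delta, err := clockDelta("1727098282.409817624", host)
		if err != nil {
			panic(err)
		}
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (within assumed 1s tolerance: %v)\n", delta, delta < time.Second)
	}
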
	I0923 13:31:22.434646  700346 start.go:83] releasing machines lock for "multinode-851928", held for 1m31.667740609s
	I0923 13:31:22.434674  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:31:22.434963  700346 main.go:141] libmachine: (multinode-851928) Calling .GetIP
	I0923 13:31:22.437863  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.438337  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:22.438370  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.438549  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:31:22.439075  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:31:22.439265  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:31:22.439367  700346 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:31:22.439418  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:31:22.439554  700346 ssh_runner.go:195] Run: cat /version.json
	I0923 13:31:22.439578  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:31:22.442584  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.442608  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.442989  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:22.443020  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.443049  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:22.443066  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.443188  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:31:22.443290  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:31:22.443382  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:31:22.443459  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:31:22.443531  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:31:22.443589  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:31:22.443682  700346 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/multinode-851928/id_rsa Username:docker}
	I0923 13:31:22.444020  700346 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/multinode-851928/id_rsa Username:docker}
	I0923 13:31:22.553213  700346 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0923 13:31:22.553953  700346 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
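
The /version.json payload above is plain JSON; decoding it takes only a small struct. The field names below follow the keys shown in the output and are not necessarily minikube's own types.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// versionInfo mirrors the keys seen in /version.json above.
	type versionInfo struct {
		ISOVersion      string `json:"iso_version"`
		KicbaseVersion  string `json:"kicbase_version"`
		MinikubeVersion string `json:"minikube_version"`
		Commit          string `json:"commit"`
	}

	func main() {
		raw := `{"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}`
		var v versionInfo
		if err := json.Unmarshal([]byte(raw), &v); err != nil {
			panic(err)
		}
		fmt.Printf("ISO %s, minikube %s (commit %s)\n", v.ISOVersion, v.MinikubeVersion, v.Commit)
	}
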
	I0923 13:31:22.554166  700346 ssh_runner.go:195] Run: systemctl --version
	I0923 13:31:22.560194  700346 command_runner.go:130] > systemd 252 (252)
	I0923 13:31:22.560252  700346 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0923 13:31:22.560444  700346 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:31:22.724305  700346 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:31:22.730132  700346 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 13:31:22.730219  700346 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:31:22.730287  700346 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:31:22.740108  700346 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 13:31:22.740149  700346 start.go:495] detecting cgroup driver to use...
	I0923 13:31:22.740221  700346 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:31:22.757562  700346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:31:22.772194  700346 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:31:22.772259  700346 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:31:22.786916  700346 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:31:22.801506  700346 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:31:22.950545  700346 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:31:23.087585  700346 docker.go:233] disabling docker service ...
	I0923 13:31:23.087667  700346 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:31:23.104900  700346 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:31:23.118910  700346 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:31:23.259349  700346 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:31:23.402378  700346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 13:31:23.416478  700346 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:31:23.436070  700346 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0923 13:31:23.436118  700346 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 13:31:23.436181  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.447083  700346 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:31:23.447167  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.457730  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.468411  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.478917  700346 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:31:23.489541  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.500253  700346 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.511127  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
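
The sed invocations above rewrite a few keys in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup and the unprivileged-port sysctl. The Go sketch below applies equivalent regexp rewrites to a sample fragment; the sample content is illustrative, not the VM's real file, and the end result (not the exact sed mechanics) is what it reproduces.

	package main

	import (
		"fmt"
		"regexp"
	)

	// Sample 02-crio.conf fragment; the real file on the VM has more keys.
	const sample = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`

	func main() {
		conf := sample
		// pause_image = "registry.k8s.io/pause:3.10"
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		// cgroup_manager = "cgroupfs", conmon_cgroup = "pod"
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$`).
			ReplaceAllString(conf, `conmon_cgroup = "pod"`)
		// allow unprivileged low ports, as the default_sysctls edit above does
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
		fmt.Print(conf)
	}
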
	I0923 13:31:23.521795  700346 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:31:23.531323  700346 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0923 13:31:23.531470  700346 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:31:23.540851  700346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:31:23.689553  700346 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 13:31:23.996997  700346 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:31:23.997071  700346 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:31:24.001998  700346 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0923 13:31:24.002037  700346 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 13:31:24.002052  700346 command_runner.go:130] > Device: 0,22	Inode: 1323        Links: 1
	I0923 13:31:24.002061  700346 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 13:31:24.002067  700346 command_runner.go:130] > Access: 2024-09-23 13:31:23.943333650 +0000
	I0923 13:31:24.002079  700346 command_runner.go:130] > Modify: 2024-09-23 13:31:23.846331146 +0000
	I0923 13:31:24.002086  700346 command_runner.go:130] > Change: 2024-09-23 13:31:23.846331146 +0000
	I0923 13:31:24.002092  700346 command_runner.go:130] >  Birth: -
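
start.go:542 polls for /var/run/crio/crio.sock with a 60s deadline before moving on. Below is a minimal poll-loop sketch under those parameters; the real check runs `stat` over SSH inside the VM, and the helper name here is hypothetical.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the path exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
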
	I0923 13:31:24.002146  700346 start.go:563] Will wait 60s for crictl version
	I0923 13:31:24.002204  700346 ssh_runner.go:195] Run: which crictl
	I0923 13:31:24.005812  700346 command_runner.go:130] > /usr/bin/crictl
	I0923 13:31:24.005921  700346 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:31:24.043295  700346 command_runner.go:130] > Version:  0.1.0
	I0923 13:31:24.043330  700346 command_runner.go:130] > RuntimeName:  cri-o
	I0923 13:31:24.043335  700346 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0923 13:31:24.043341  700346 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 13:31:24.046822  700346 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 13:31:24.046905  700346 ssh_runner.go:195] Run: crio --version
	I0923 13:31:24.075309  700346 command_runner.go:130] > crio version 1.29.1
	I0923 13:31:24.075335  700346 command_runner.go:130] > Version:        1.29.1
	I0923 13:31:24.075340  700346 command_runner.go:130] > GitCommit:      unknown
	I0923 13:31:24.075344  700346 command_runner.go:130] > GitCommitDate:  unknown
	I0923 13:31:24.075348  700346 command_runner.go:130] > GitTreeState:   clean
	I0923 13:31:24.075354  700346 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0923 13:31:24.075358  700346 command_runner.go:130] > GoVersion:      go1.21.6
	I0923 13:31:24.075362  700346 command_runner.go:130] > Compiler:       gc
	I0923 13:31:24.075366  700346 command_runner.go:130] > Platform:       linux/amd64
	I0923 13:31:24.075370  700346 command_runner.go:130] > Linkmode:       dynamic
	I0923 13:31:24.075395  700346 command_runner.go:130] > BuildTags:      
	I0923 13:31:24.075400  700346 command_runner.go:130] >   containers_image_ostree_stub
	I0923 13:31:24.075404  700346 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0923 13:31:24.075407  700346 command_runner.go:130] >   btrfs_noversion
	I0923 13:31:24.075412  700346 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0923 13:31:24.075415  700346 command_runner.go:130] >   libdm_no_deferred_remove
	I0923 13:31:24.075419  700346 command_runner.go:130] >   seccomp
	I0923 13:31:24.075423  700346 command_runner.go:130] > LDFlags:          unknown
	I0923 13:31:24.075430  700346 command_runner.go:130] > SeccompEnabled:   true
	I0923 13:31:24.075434  700346 command_runner.go:130] > AppArmorEnabled:  false
	I0923 13:31:24.076544  700346 ssh_runner.go:195] Run: crio --version
	I0923 13:31:24.104674  700346 command_runner.go:130] > crio version 1.29.1
	I0923 13:31:24.104701  700346 command_runner.go:130] > Version:        1.29.1
	I0923 13:31:24.104707  700346 command_runner.go:130] > GitCommit:      unknown
	I0923 13:31:24.104711  700346 command_runner.go:130] > GitCommitDate:  unknown
	I0923 13:31:24.104715  700346 command_runner.go:130] > GitTreeState:   clean
	I0923 13:31:24.104721  700346 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0923 13:31:24.104725  700346 command_runner.go:130] > GoVersion:      go1.21.6
	I0923 13:31:24.104729  700346 command_runner.go:130] > Compiler:       gc
	I0923 13:31:24.104733  700346 command_runner.go:130] > Platform:       linux/amd64
	I0923 13:31:24.104737  700346 command_runner.go:130] > Linkmode:       dynamic
	I0923 13:31:24.104741  700346 command_runner.go:130] > BuildTags:      
	I0923 13:31:24.104746  700346 command_runner.go:130] >   containers_image_ostree_stub
	I0923 13:31:24.104750  700346 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0923 13:31:24.104753  700346 command_runner.go:130] >   btrfs_noversion
	I0923 13:31:24.104757  700346 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0923 13:31:24.104761  700346 command_runner.go:130] >   libdm_no_deferred_remove
	I0923 13:31:24.104764  700346 command_runner.go:130] >   seccomp
	I0923 13:31:24.104768  700346 command_runner.go:130] > LDFlags:          unknown
	I0923 13:31:24.104772  700346 command_runner.go:130] > SeccompEnabled:   true
	I0923 13:31:24.104776  700346 command_runner.go:130] > AppArmorEnabled:  false
	I0923 13:31:24.107554  700346 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 13:31:24.108884  700346 main.go:141] libmachine: (multinode-851928) Calling .GetIP
	I0923 13:31:24.111857  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:24.112232  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:24.112272  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:24.112473  700346 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 13:31:24.116506  700346 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0923 13:31:24.116652  700346 kubeadm.go:883] updating cluster {Name:multinode-851928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-851928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.25 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:31:24.116869  700346 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:31:24.116944  700346 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:31:24.157804  700346 command_runner.go:130] > {
	I0923 13:31:24.157856  700346 command_runner.go:130] >   "images": [
	I0923 13:31:24.157863  700346 command_runner.go:130] >     {
	I0923 13:31:24.157875  700346 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0923 13:31:24.157881  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.157895  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0923 13:31:24.157900  700346 command_runner.go:130] >       ],
	I0923 13:31:24.157905  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.157917  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0923 13:31:24.157926  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0923 13:31:24.157931  700346 command_runner.go:130] >       ],
	I0923 13:31:24.157939  700346 command_runner.go:130] >       "size": "87190579",
	I0923 13:31:24.157943  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.157948  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.157958  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.157965  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.157969  700346 command_runner.go:130] >     },
	I0923 13:31:24.157972  700346 command_runner.go:130] >     {
	I0923 13:31:24.157980  700346 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0923 13:31:24.157985  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.157991  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0923 13:31:24.157994  700346 command_runner.go:130] >       ],
	I0923 13:31:24.157999  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158008  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0923 13:31:24.158018  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0923 13:31:24.158023  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158027  700346 command_runner.go:130] >       "size": "1363676",
	I0923 13:31:24.158031  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.158041  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158048  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158059  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158066  700346 command_runner.go:130] >     },
	I0923 13:31:24.158070  700346 command_runner.go:130] >     {
	I0923 13:31:24.158077  700346 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0923 13:31:24.158082  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158087  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0923 13:31:24.158092  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158096  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158103  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0923 13:31:24.158111  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0923 13:31:24.158115  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158122  700346 command_runner.go:130] >       "size": "31470524",
	I0923 13:31:24.158126  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.158131  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158136  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158141  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158147  700346 command_runner.go:130] >     },
	I0923 13:31:24.158151  700346 command_runner.go:130] >     {
	I0923 13:31:24.158160  700346 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0923 13:31:24.158164  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158171  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0923 13:31:24.158175  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158181  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158188  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0923 13:31:24.158205  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0923 13:31:24.158212  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158216  700346 command_runner.go:130] >       "size": "63273227",
	I0923 13:31:24.158228  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.158233  700346 command_runner.go:130] >       "username": "nonroot",
	I0923 13:31:24.158238  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158242  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158247  700346 command_runner.go:130] >     },
	I0923 13:31:24.158251  700346 command_runner.go:130] >     {
	I0923 13:31:24.158266  700346 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0923 13:31:24.158273  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158278  700346 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0923 13:31:24.158282  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158291  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158301  700346 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0923 13:31:24.158309  700346 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0923 13:31:24.158314  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158319  700346 command_runner.go:130] >       "size": "149009664",
	I0923 13:31:24.158325  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.158329  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.158333  700346 command_runner.go:130] >       },
	I0923 13:31:24.158337  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158341  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158345  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158349  700346 command_runner.go:130] >     },
	I0923 13:31:24.158356  700346 command_runner.go:130] >     {
	I0923 13:31:24.158365  700346 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0923 13:31:24.158369  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158375  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0923 13:31:24.158381  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158385  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158393  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0923 13:31:24.158403  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0923 13:31:24.158407  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158411  700346 command_runner.go:130] >       "size": "95237600",
	I0923 13:31:24.158415  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.158418  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.158422  700346 command_runner.go:130] >       },
	I0923 13:31:24.158427  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158431  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158437  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158441  700346 command_runner.go:130] >     },
	I0923 13:31:24.158453  700346 command_runner.go:130] >     {
	I0923 13:31:24.158463  700346 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0923 13:31:24.158470  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158479  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0923 13:31:24.158487  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158491  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158499  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0923 13:31:24.158509  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0923 13:31:24.158516  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158520  700346 command_runner.go:130] >       "size": "89437508",
	I0923 13:31:24.158527  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.158531  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.158535  700346 command_runner.go:130] >       },
	I0923 13:31:24.158539  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158543  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158549  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158553  700346 command_runner.go:130] >     },
	I0923 13:31:24.158557  700346 command_runner.go:130] >     {
	I0923 13:31:24.158563  700346 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0923 13:31:24.158572  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158577  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0923 13:31:24.158581  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158585  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158611  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0923 13:31:24.158626  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0923 13:31:24.158633  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158639  700346 command_runner.go:130] >       "size": "92733849",
	I0923 13:31:24.158649  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.158659  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158665  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158673  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158680  700346 command_runner.go:130] >     },
	I0923 13:31:24.158685  700346 command_runner.go:130] >     {
	I0923 13:31:24.158699  700346 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0923 13:31:24.158703  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158708  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0923 13:31:24.158712  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158720  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158732  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0923 13:31:24.158747  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0923 13:31:24.158758  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158766  700346 command_runner.go:130] >       "size": "68420934",
	I0923 13:31:24.158773  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.158783  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.158789  700346 command_runner.go:130] >       },
	I0923 13:31:24.158800  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158810  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158817  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158826  700346 command_runner.go:130] >     },
	I0923 13:31:24.158832  700346 command_runner.go:130] >     {
	I0923 13:31:24.158845  700346 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0923 13:31:24.158854  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158862  700346 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0923 13:31:24.158873  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158880  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158895  700346 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0923 13:31:24.158916  700346 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0923 13:31:24.158926  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158934  700346 command_runner.go:130] >       "size": "742080",
	I0923 13:31:24.158944  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.158952  700346 command_runner.go:130] >         "value": "65535"
	I0923 13:31:24.158962  700346 command_runner.go:130] >       },
	I0923 13:31:24.158970  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158980  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158987  700346 command_runner.go:130] >       "pinned": true
	I0923 13:31:24.158997  700346 command_runner.go:130] >     }
	I0923 13:31:24.159015  700346 command_runner.go:130] >   ]
	I0923 13:31:24.159026  700346 command_runner.go:130] > }
	I0923 13:31:24.159299  700346 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:31:24.159318  700346 crio.go:433] Images already preloaded, skipping extraction
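
The "all images are preloaded" decision comes from decoding `sudo crictl images --output json` and checking repoTags against the expected preload set. The sketch below mirrors the JSON shape shown above; the struct names are illustrative and the expected-image list is abbreviated.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictlImage mirrors the fields visible in the JSON output above.
	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	}

	type crictlImages struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var list crictlImages
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Abbreviated expectation; the real preload covers all k8s v1.31.1 images.
		for _, want := range []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/etcd:3.5.15-0"} {
			fmt.Printf("%s preloaded: %v\n", want, have[want])
		}
	}
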
	I0923 13:31:24.159374  700346 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:31:24.192044  700346 command_runner.go:130] > {
	I0923 13:31:24.192081  700346 command_runner.go:130] >   "images": [
	I0923 13:31:24.192089  700346 command_runner.go:130] >     {
	I0923 13:31:24.192102  700346 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0923 13:31:24.192109  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192120  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0923 13:31:24.192127  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192139  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192152  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0923 13:31:24.192174  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0923 13:31:24.192185  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192194  700346 command_runner.go:130] >       "size": "87190579",
	I0923 13:31:24.192202  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.192207  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192234  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192245  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192250  700346 command_runner.go:130] >     },
	I0923 13:31:24.192254  700346 command_runner.go:130] >     {
	I0923 13:31:24.192259  700346 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0923 13:31:24.192266  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192276  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0923 13:31:24.192284  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192289  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192301  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0923 13:31:24.192308  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0923 13:31:24.192314  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192319  700346 command_runner.go:130] >       "size": "1363676",
	I0923 13:31:24.192326  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.192333  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192340  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192344  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192347  700346 command_runner.go:130] >     },
	I0923 13:31:24.192351  700346 command_runner.go:130] >     {
	I0923 13:31:24.192357  700346 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0923 13:31:24.192364  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192370  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0923 13:31:24.192374  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192378  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192387  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0923 13:31:24.192394  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0923 13:31:24.192400  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192404  700346 command_runner.go:130] >       "size": "31470524",
	I0923 13:31:24.192410  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.192416  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192420  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192424  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192430  700346 command_runner.go:130] >     },
	I0923 13:31:24.192433  700346 command_runner.go:130] >     {
	I0923 13:31:24.192439  700346 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0923 13:31:24.192446  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192451  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0923 13:31:24.192457  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192461  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192475  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0923 13:31:24.192487  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0923 13:31:24.192494  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192498  700346 command_runner.go:130] >       "size": "63273227",
	I0923 13:31:24.192504  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.192510  700346 command_runner.go:130] >       "username": "nonroot",
	I0923 13:31:24.192522  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192533  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192539  700346 command_runner.go:130] >     },
	I0923 13:31:24.192543  700346 command_runner.go:130] >     {
	I0923 13:31:24.192548  700346 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0923 13:31:24.192555  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192560  700346 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0923 13:31:24.192564  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192570  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192576  700346 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0923 13:31:24.192585  700346 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0923 13:31:24.192589  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192594  700346 command_runner.go:130] >       "size": "149009664",
	I0923 13:31:24.192597  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.192602  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.192605  700346 command_runner.go:130] >       },
	I0923 13:31:24.192611  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192617  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192621  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192625  700346 command_runner.go:130] >     },
	I0923 13:31:24.192631  700346 command_runner.go:130] >     {
	I0923 13:31:24.192638  700346 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0923 13:31:24.192644  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192649  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0923 13:31:24.192652  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192660  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192668  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0923 13:31:24.192677  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0923 13:31:24.192681  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192685  700346 command_runner.go:130] >       "size": "95237600",
	I0923 13:31:24.192692  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.192696  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.192700  700346 command_runner.go:130] >       },
	I0923 13:31:24.192705  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192709  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192713  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192719  700346 command_runner.go:130] >     },
	I0923 13:31:24.192723  700346 command_runner.go:130] >     {
	I0923 13:31:24.192729  700346 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0923 13:31:24.192736  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192741  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0923 13:31:24.192747  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192751  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192761  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0923 13:31:24.192772  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0923 13:31:24.192781  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192786  700346 command_runner.go:130] >       "size": "89437508",
	I0923 13:31:24.192793  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.192798  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.192808  700346 command_runner.go:130] >       },
	I0923 13:31:24.192812  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192816  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192821  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192825  700346 command_runner.go:130] >     },
	I0923 13:31:24.192828  700346 command_runner.go:130] >     {
	I0923 13:31:24.192834  700346 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0923 13:31:24.192841  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192846  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0923 13:31:24.192851  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192856  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192870  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0923 13:31:24.192880  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0923 13:31:24.192884  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192888  700346 command_runner.go:130] >       "size": "92733849",
	I0923 13:31:24.192894  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.192904  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192911  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192925  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192934  700346 command_runner.go:130] >     },
	I0923 13:31:24.192940  700346 command_runner.go:130] >     {
	I0923 13:31:24.192952  700346 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0923 13:31:24.192963  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192972  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0923 13:31:24.192985  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192992  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.193004  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0923 13:31:24.193014  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0923 13:31:24.193022  700346 command_runner.go:130] >       ],
	I0923 13:31:24.193026  700346 command_runner.go:130] >       "size": "68420934",
	I0923 13:31:24.193032  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.193036  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.193041  700346 command_runner.go:130] >       },
	I0923 13:31:24.193052  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.193059  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.193064  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.193068  700346 command_runner.go:130] >     },
	I0923 13:31:24.193072  700346 command_runner.go:130] >     {
	I0923 13:31:24.193077  700346 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0923 13:31:24.193084  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.193089  700346 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0923 13:31:24.193093  700346 command_runner.go:130] >       ],
	I0923 13:31:24.193097  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.193106  700346 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0923 13:31:24.193116  700346 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0923 13:31:24.193124  700346 command_runner.go:130] >       ],
	I0923 13:31:24.193128  700346 command_runner.go:130] >       "size": "742080",
	I0923 13:31:24.193135  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.193138  700346 command_runner.go:130] >         "value": "65535"
	I0923 13:31:24.193142  700346 command_runner.go:130] >       },
	I0923 13:31:24.193147  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.193150  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.193155  700346 command_runner.go:130] >       "pinned": true
	I0923 13:31:24.193158  700346 command_runner.go:130] >     }
	I0923 13:31:24.193162  700346 command_runner.go:130] >   ]
	I0923 13:31:24.193165  700346 command_runner.go:130] > }
	I0923 13:31:24.193305  700346 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:31:24.193319  700346 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:31:24.193327  700346 kubeadm.go:934] updating node { 192.168.39.168 8443 v1.31.1 crio true true} ...
	I0923 13:31:24.193435  700346 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-851928 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-851928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
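The kubelet [Unit]/[Service] drop-in printed above is rendered from the cluster config that follows it (the hostname-override and node-ip come from the node entry: multinode-851928 / 192.168.39.168). Below is a rough, hypothetical sketch of rendering such a drop-in with Go's text/template; the template text, type, and values here are illustrative and are not minikube's actual implementation.

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn approximates the drop-in shown in the log above;
// it is an illustrative template, not minikube's real one.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type nodeParams struct {
	BinDir   string
	NodeName string
	NodeIP   string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the node entry logged above.
	p := nodeParams{
		BinDir:   "/var/lib/minikube/binaries/v1.31.1",
		NodeName: "multinode-851928",
		NodeIP:   "192.168.39.168",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}

The `crio config` run that follows dumps the effective CRI-O configuration (TOML) on the node, which minikube then inspects before writing its own overrides.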
	I0923 13:31:24.193517  700346 ssh_runner.go:195] Run: crio config
	I0923 13:31:24.240659  700346 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0923 13:31:24.240695  700346 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0923 13:31:24.240707  700346 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0923 13:31:24.240712  700346 command_runner.go:130] > #
	I0923 13:31:24.240722  700346 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0923 13:31:24.240731  700346 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0923 13:31:24.240744  700346 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0923 13:31:24.240795  700346 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0923 13:31:24.240807  700346 command_runner.go:130] > # reload'.
	I0923 13:31:24.240816  700346 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0923 13:31:24.240829  700346 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0923 13:31:24.240845  700346 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0923 13:31:24.240855  700346 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0923 13:31:24.240859  700346 command_runner.go:130] > [crio]
	I0923 13:31:24.240868  700346 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0923 13:31:24.240877  700346 command_runner.go:130] > # containers images, in this directory.
	I0923 13:31:24.240884  700346 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0923 13:31:24.240903  700346 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0923 13:31:24.240915  700346 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0923 13:31:24.240926  700346 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0923 13:31:24.240932  700346 command_runner.go:130] > # imagestore = ""
	I0923 13:31:24.240949  700346 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0923 13:31:24.240963  700346 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0923 13:31:24.240973  700346 command_runner.go:130] > storage_driver = "overlay"
	I0923 13:31:24.240981  700346 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0923 13:31:24.240992  700346 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0923 13:31:24.241002  700346 command_runner.go:130] > storage_option = [
	I0923 13:31:24.241009  700346 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0923 13:31:24.241028  700346 command_runner.go:130] > ]
	I0923 13:31:24.241039  700346 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0923 13:31:24.241050  700346 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0923 13:31:24.241061  700346 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0923 13:31:24.241074  700346 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0923 13:31:24.241088  700346 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0923 13:31:24.241097  700346 command_runner.go:130] > # always happen on a node reboot
	I0923 13:31:24.241105  700346 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0923 13:31:24.241128  700346 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0923 13:31:24.241140  700346 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0923 13:31:24.241147  700346 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0923 13:31:24.241158  700346 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0923 13:31:24.241172  700346 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0923 13:31:24.241186  700346 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0923 13:31:24.241197  700346 command_runner.go:130] > # internal_wipe = true
	I0923 13:31:24.241209  700346 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0923 13:31:24.241221  700346 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0923 13:31:24.241229  700346 command_runner.go:130] > # internal_repair = false
	I0923 13:31:24.241246  700346 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0923 13:31:24.241260  700346 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0923 13:31:24.241271  700346 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0923 13:31:24.241287  700346 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0923 13:31:24.241299  700346 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0923 13:31:24.241308  700346 command_runner.go:130] > [crio.api]
	I0923 13:31:24.241317  700346 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0923 13:31:24.241328  700346 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0923 13:31:24.241342  700346 command_runner.go:130] > # IP address on which the stream server will listen.
	I0923 13:31:24.241348  700346 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0923 13:31:24.241362  700346 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0923 13:31:24.241373  700346 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0923 13:31:24.241379  700346 command_runner.go:130] > # stream_port = "0"
	I0923 13:31:24.241391  700346 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0923 13:31:24.241403  700346 command_runner.go:130] > # stream_enable_tls = false
	I0923 13:31:24.241427  700346 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0923 13:31:24.241443  700346 command_runner.go:130] > # stream_idle_timeout = ""
	I0923 13:31:24.241454  700346 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0923 13:31:24.241466  700346 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0923 13:31:24.241474  700346 command_runner.go:130] > # minutes.
	I0923 13:31:24.241486  700346 command_runner.go:130] > # stream_tls_cert = ""
	I0923 13:31:24.241498  700346 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0923 13:31:24.241511  700346 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0923 13:31:24.241517  700346 command_runner.go:130] > # stream_tls_key = ""
	I0923 13:31:24.241526  700346 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0923 13:31:24.241538  700346 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0923 13:31:24.241568  700346 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0923 13:31:24.241580  700346 command_runner.go:130] > # stream_tls_ca = ""
	I0923 13:31:24.241592  700346 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0923 13:31:24.241601  700346 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0923 13:31:24.241612  700346 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0923 13:31:24.241621  700346 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0923 13:31:24.241631  700346 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0923 13:31:24.241643  700346 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0923 13:31:24.241650  700346 command_runner.go:130] > [crio.runtime]
	I0923 13:31:24.241661  700346 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0923 13:31:24.241672  700346 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0923 13:31:24.241678  700346 command_runner.go:130] > # "nofile=1024:2048"
	I0923 13:31:24.241693  700346 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0923 13:31:24.241700  700346 command_runner.go:130] > # default_ulimits = [
	I0923 13:31:24.241705  700346 command_runner.go:130] > # ]
	I0923 13:31:24.241714  700346 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0923 13:31:24.241725  700346 command_runner.go:130] > # no_pivot = false
	I0923 13:31:24.241734  700346 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0923 13:31:24.241744  700346 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0923 13:31:24.241751  700346 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0923 13:31:24.241767  700346 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0923 13:31:24.241779  700346 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0923 13:31:24.241800  700346 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0923 13:31:24.241811  700346 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0923 13:31:24.241818  700346 command_runner.go:130] > # Cgroup setting for conmon
	I0923 13:31:24.241849  700346 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0923 13:31:24.241858  700346 command_runner.go:130] > conmon_cgroup = "pod"
	I0923 13:31:24.241872  700346 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0923 13:31:24.241881  700346 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0923 13:31:24.241894  700346 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0923 13:31:24.241903  700346 command_runner.go:130] > conmon_env = [
	I0923 13:31:24.241915  700346 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0923 13:31:24.241925  700346 command_runner.go:130] > ]
	I0923 13:31:24.241934  700346 command_runner.go:130] > # Additional environment variables to set for all the
	I0923 13:31:24.241945  700346 command_runner.go:130] > # containers. These are overridden if set in the
	I0923 13:31:24.241954  700346 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0923 13:31:24.241962  700346 command_runner.go:130] > # default_env = [
	I0923 13:31:24.241968  700346 command_runner.go:130] > # ]
	I0923 13:31:24.241979  700346 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0923 13:31:24.241998  700346 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0923 13:31:24.242006  700346 command_runner.go:130] > # selinux = false
	I0923 13:31:24.242016  700346 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0923 13:31:24.242031  700346 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0923 13:31:24.242044  700346 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0923 13:31:24.242050  700346 command_runner.go:130] > # seccomp_profile = ""
	I0923 13:31:24.242063  700346 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0923 13:31:24.242074  700346 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0923 13:31:24.242086  700346 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0923 13:31:24.242094  700346 command_runner.go:130] > # which might increase security.
	I0923 13:31:24.242103  700346 command_runner.go:130] > # This option is currently deprecated,
	I0923 13:31:24.242113  700346 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0923 13:31:24.242124  700346 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0923 13:31:24.242136  700346 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0923 13:31:24.242147  700346 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0923 13:31:24.242161  700346 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0923 13:31:24.242185  700346 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0923 13:31:24.242198  700346 command_runner.go:130] > # This option supports live configuration reload.
	I0923 13:31:24.242211  700346 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0923 13:31:24.242221  700346 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0923 13:31:24.242227  700346 command_runner.go:130] > # the cgroup blockio controller.
	I0923 13:31:24.242236  700346 command_runner.go:130] > # blockio_config_file = ""
	I0923 13:31:24.242245  700346 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0923 13:31:24.242254  700346 command_runner.go:130] > # blockio parameters.
	I0923 13:31:24.242260  700346 command_runner.go:130] > # blockio_reload = false
	I0923 13:31:24.242272  700346 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0923 13:31:24.242281  700346 command_runner.go:130] > # irqbalance daemon.
	I0923 13:31:24.242288  700346 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0923 13:31:24.242300  700346 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0923 13:31:24.242311  700346 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0923 13:31:24.242324  700346 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0923 13:31:24.242333  700346 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0923 13:31:24.242348  700346 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0923 13:31:24.242359  700346 command_runner.go:130] > # This option supports live configuration reload.
	I0923 13:31:24.242368  700346 command_runner.go:130] > # rdt_config_file = ""
	I0923 13:31:24.242376  700346 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0923 13:31:24.242385  700346 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0923 13:31:24.242430  700346 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0923 13:31:24.242441  700346 command_runner.go:130] > # separate_pull_cgroup = ""
	I0923 13:31:24.242451  700346 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0923 13:31:24.242463  700346 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0923 13:31:24.242473  700346 command_runner.go:130] > # will be added.
	I0923 13:31:24.242484  700346 command_runner.go:130] > # default_capabilities = [
	I0923 13:31:24.242493  700346 command_runner.go:130] > # 	"CHOWN",
	I0923 13:31:24.242499  700346 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0923 13:31:24.242510  700346 command_runner.go:130] > # 	"FSETID",
	I0923 13:31:24.242516  700346 command_runner.go:130] > # 	"FOWNER",
	I0923 13:31:24.242524  700346 command_runner.go:130] > # 	"SETGID",
	I0923 13:31:24.242529  700346 command_runner.go:130] > # 	"SETUID",
	I0923 13:31:24.242546  700346 command_runner.go:130] > # 	"SETPCAP",
	I0923 13:31:24.242555  700346 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0923 13:31:24.242561  700346 command_runner.go:130] > # 	"KILL",
	I0923 13:31:24.242567  700346 command_runner.go:130] > # ]
	I0923 13:31:24.242579  700346 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0923 13:31:24.242590  700346 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0923 13:31:24.242601  700346 command_runner.go:130] > # add_inheritable_capabilities = false
	I0923 13:31:24.242610  700346 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0923 13:31:24.242622  700346 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0923 13:31:24.242631  700346 command_runner.go:130] > default_sysctls = [
	I0923 13:31:24.242642  700346 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0923 13:31:24.242650  700346 command_runner.go:130] > ]
	I0923 13:31:24.242657  700346 command_runner.go:130] > # List of devices on the host that a
	I0923 13:31:24.242670  700346 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0923 13:31:24.242678  700346 command_runner.go:130] > # allowed_devices = [
	I0923 13:31:24.242684  700346 command_runner.go:130] > # 	"/dev/fuse",
	I0923 13:31:24.242692  700346 command_runner.go:130] > # ]
	I0923 13:31:24.242701  700346 command_runner.go:130] > # List of additional devices. specified as
	I0923 13:31:24.242714  700346 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0923 13:31:24.242727  700346 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0923 13:31:24.242739  700346 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0923 13:31:24.242745  700346 command_runner.go:130] > # additional_devices = [
	I0923 13:31:24.242759  700346 command_runner.go:130] > # ]
	I0923 13:31:24.242771  700346 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0923 13:31:24.242777  700346 command_runner.go:130] > # cdi_spec_dirs = [
	I0923 13:31:24.242783  700346 command_runner.go:130] > # 	"/etc/cdi",
	I0923 13:31:24.242793  700346 command_runner.go:130] > # 	"/var/run/cdi",
	I0923 13:31:24.242798  700346 command_runner.go:130] > # ]
	I0923 13:31:24.242812  700346 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0923 13:31:24.242824  700346 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0923 13:31:24.242837  700346 command_runner.go:130] > # Defaults to false.
	I0923 13:31:24.242844  700346 command_runner.go:130] > # device_ownership_from_security_context = false
	I0923 13:31:24.242857  700346 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0923 13:31:24.242876  700346 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0923 13:31:24.242885  700346 command_runner.go:130] > # hooks_dir = [
	I0923 13:31:24.242893  700346 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0923 13:31:24.242900  700346 command_runner.go:130] > # ]
	I0923 13:31:24.242909  700346 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0923 13:31:24.242921  700346 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0923 13:31:24.242932  700346 command_runner.go:130] > # its default mounts from the following two files:
	I0923 13:31:24.242941  700346 command_runner.go:130] > #
	I0923 13:31:24.242950  700346 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0923 13:31:24.242962  700346 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0923 13:31:24.242975  700346 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0923 13:31:24.242984  700346 command_runner.go:130] > #
	I0923 13:31:24.242993  700346 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0923 13:31:24.243005  700346 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0923 13:31:24.243019  700346 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0923 13:31:24.243028  700346 command_runner.go:130] > #      only add mounts it finds in this file.
	I0923 13:31:24.243036  700346 command_runner.go:130] > #
	I0923 13:31:24.243043  700346 command_runner.go:130] > # default_mounts_file = ""
	I0923 13:31:24.243054  700346 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0923 13:31:24.243071  700346 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0923 13:31:24.243080  700346 command_runner.go:130] > pids_limit = 1024
	I0923 13:31:24.243091  700346 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0923 13:31:24.243104  700346 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0923 13:31:24.243117  700346 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0923 13:31:24.243129  700346 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0923 13:31:24.243138  700346 command_runner.go:130] > # log_size_max = -1
	I0923 13:31:24.243149  700346 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0923 13:31:24.243158  700346 command_runner.go:130] > # log_to_journald = false
	I0923 13:31:24.243167  700346 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0923 13:31:24.243178  700346 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0923 13:31:24.243185  700346 command_runner.go:130] > # Path to directory for container attach sockets.
	I0923 13:31:24.243196  700346 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0923 13:31:24.243205  700346 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0923 13:31:24.243221  700346 command_runner.go:130] > # bind_mount_prefix = ""
	I0923 13:31:24.243233  700346 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0923 13:31:24.243244  700346 command_runner.go:130] > # read_only = false
	I0923 13:31:24.243254  700346 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0923 13:31:24.243266  700346 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0923 13:31:24.243275  700346 command_runner.go:130] > # live configuration reload.
	I0923 13:31:24.243281  700346 command_runner.go:130] > # log_level = "info"
	I0923 13:31:24.243292  700346 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0923 13:31:24.243302  700346 command_runner.go:130] > # This option supports live configuration reload.
	I0923 13:31:24.243311  700346 command_runner.go:130] > # log_filter = ""
	I0923 13:31:24.243320  700346 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0923 13:31:24.243332  700346 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0923 13:31:24.243341  700346 command_runner.go:130] > # separated by comma.
	I0923 13:31:24.243351  700346 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 13:31:24.243361  700346 command_runner.go:130] > # uid_mappings = ""
	I0923 13:31:24.243369  700346 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0923 13:31:24.243382  700346 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0923 13:31:24.243389  700346 command_runner.go:130] > # separated by comma.
	I0923 13:31:24.243401  700346 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 13:31:24.243410  700346 command_runner.go:130] > # gid_mappings = ""
	I0923 13:31:24.243420  700346 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0923 13:31:24.243435  700346 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0923 13:31:24.243451  700346 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0923 13:31:24.243466  700346 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 13:31:24.243490  700346 command_runner.go:130] > # minimum_mappable_uid = -1
	I0923 13:31:24.243503  700346 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0923 13:31:24.243516  700346 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0923 13:31:24.243528  700346 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0923 13:31:24.243540  700346 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 13:31:24.243550  700346 command_runner.go:130] > # minimum_mappable_gid = -1
	I0923 13:31:24.243558  700346 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0923 13:31:24.243568  700346 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0923 13:31:24.243578  700346 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0923 13:31:24.243594  700346 command_runner.go:130] > # ctr_stop_timeout = 30
	I0923 13:31:24.243606  700346 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0923 13:31:24.243621  700346 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0923 13:31:24.243633  700346 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0923 13:31:24.243643  700346 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0923 13:31:24.243649  700346 command_runner.go:130] > drop_infra_ctr = false
	I0923 13:31:24.243661  700346 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0923 13:31:24.243673  700346 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0923 13:31:24.243685  700346 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0923 13:31:24.243694  700346 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0923 13:31:24.243706  700346 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0923 13:31:24.243718  700346 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0923 13:31:24.243732  700346 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0923 13:31:24.243743  700346 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0923 13:31:24.243753  700346 command_runner.go:130] > # shared_cpuset = ""
	I0923 13:31:24.243765  700346 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0923 13:31:24.243774  700346 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0923 13:31:24.243785  700346 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0923 13:31:24.243797  700346 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0923 13:31:24.243806  700346 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0923 13:31:24.243816  700346 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0923 13:31:24.243829  700346 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0923 13:31:24.243838  700346 command_runner.go:130] > # enable_criu_support = false
	I0923 13:31:24.243847  700346 command_runner.go:130] > # Enable/disable the generation of the container,
	I0923 13:31:24.243864  700346 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0923 13:31:24.243874  700346 command_runner.go:130] > # enable_pod_events = false
	I0923 13:31:24.243884  700346 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0923 13:31:24.243897  700346 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0923 13:31:24.243906  700346 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0923 13:31:24.243915  700346 command_runner.go:130] > # default_runtime = "runc"
	I0923 13:31:24.243923  700346 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0923 13:31:24.243937  700346 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0923 13:31:24.243958  700346 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0923 13:31:24.243975  700346 command_runner.go:130] > # creation as a file is not desired either.
	I0923 13:31:24.243990  700346 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0923 13:31:24.244002  700346 command_runner.go:130] > # the hostname is being managed dynamically.
	I0923 13:31:24.244010  700346 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0923 13:31:24.244017  700346 command_runner.go:130] > # ]
	I0923 13:31:24.244027  700346 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0923 13:31:24.244039  700346 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0923 13:31:24.244051  700346 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0923 13:31:24.244063  700346 command_runner.go:130] > # Each entry in the table should follow the format:
	I0923 13:31:24.244071  700346 command_runner.go:130] > #
	I0923 13:31:24.244078  700346 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0923 13:31:24.244088  700346 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0923 13:31:24.244140  700346 command_runner.go:130] > # runtime_type = "oci"
	I0923 13:31:24.244152  700346 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0923 13:31:24.244159  700346 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0923 13:31:24.244169  700346 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0923 13:31:24.244180  700346 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0923 13:31:24.244190  700346 command_runner.go:130] > # monitor_env = []
	I0923 13:31:24.244198  700346 command_runner.go:130] > # privileged_without_host_devices = false
	I0923 13:31:24.244207  700346 command_runner.go:130] > # allowed_annotations = []
	I0923 13:31:24.244215  700346 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0923 13:31:24.244223  700346 command_runner.go:130] > # Where:
	I0923 13:31:24.244232  700346 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0923 13:31:24.244244  700346 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0923 13:31:24.244260  700346 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0923 13:31:24.244271  700346 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0923 13:31:24.244280  700346 command_runner.go:130] > #   in $PATH.
	I0923 13:31:24.244290  700346 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0923 13:31:24.244300  700346 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0923 13:31:24.244315  700346 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0923 13:31:24.244324  700346 command_runner.go:130] > #   state.
	I0923 13:31:24.244333  700346 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0923 13:31:24.244345  700346 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0923 13:31:24.244360  700346 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0923 13:31:24.244372  700346 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0923 13:31:24.244384  700346 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0923 13:31:24.244397  700346 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0923 13:31:24.244407  700346 command_runner.go:130] > #   The currently recognized values are:
	I0923 13:31:24.244417  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0923 13:31:24.244431  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0923 13:31:24.244440  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0923 13:31:24.244450  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0923 13:31:24.244457  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0923 13:31:24.244465  700346 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0923 13:31:24.244471  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0923 13:31:24.244478  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0923 13:31:24.244489  700346 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0923 13:31:24.244495  700346 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0923 13:31:24.244502  700346 command_runner.go:130] > #   deprecated option "conmon".
	I0923 13:31:24.244511  700346 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0923 13:31:24.244519  700346 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0923 13:31:24.244525  700346 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0923 13:31:24.244532  700346 command_runner.go:130] > #   should be moved to the container's cgroup
	I0923 13:31:24.244538  700346 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0923 13:31:24.244545  700346 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0923 13:31:24.244552  700346 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0923 13:31:24.244561  700346 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0923 13:31:24.244566  700346 command_runner.go:130] > #
	I0923 13:31:24.244571  700346 command_runner.go:130] > # Using the seccomp notifier feature:
	I0923 13:31:24.244574  700346 command_runner.go:130] > #
	I0923 13:31:24.244580  700346 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0923 13:31:24.244588  700346 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0923 13:31:24.244592  700346 command_runner.go:130] > #
	I0923 13:31:24.244602  700346 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0923 13:31:24.244610  700346 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0923 13:31:24.244614  700346 command_runner.go:130] > #
	I0923 13:31:24.244621  700346 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0923 13:31:24.244627  700346 command_runner.go:130] > # feature.
	I0923 13:31:24.244630  700346 command_runner.go:130] > #
	I0923 13:31:24.244636  700346 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0923 13:31:24.244644  700346 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0923 13:31:24.244650  700346 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0923 13:31:24.244658  700346 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0923 13:31:24.244664  700346 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0923 13:31:24.244669  700346 command_runner.go:130] > #
	I0923 13:31:24.244674  700346 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0923 13:31:24.244680  700346 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0923 13:31:24.244685  700346 command_runner.go:130] > #
	I0923 13:31:24.244690  700346 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0923 13:31:24.244697  700346 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0923 13:31:24.244701  700346 command_runner.go:130] > #
	I0923 13:31:24.244707  700346 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0923 13:31:24.244714  700346 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0923 13:31:24.244718  700346 command_runner.go:130] > # limitation.
	I0923 13:31:24.244724  700346 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0923 13:31:24.244728  700346 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0923 13:31:24.244733  700346 command_runner.go:130] > runtime_type = "oci"
	I0923 13:31:24.244737  700346 command_runner.go:130] > runtime_root = "/run/runc"
	I0923 13:31:24.244744  700346 command_runner.go:130] > runtime_config_path = ""
	I0923 13:31:24.244748  700346 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0923 13:31:24.244752  700346 command_runner.go:130] > monitor_cgroup = "pod"
	I0923 13:31:24.244756  700346 command_runner.go:130] > monitor_exec_cgroup = ""
	I0923 13:31:24.244760  700346 command_runner.go:130] > monitor_env = [
	I0923 13:31:24.244765  700346 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0923 13:31:24.244770  700346 command_runner.go:130] > ]
	I0923 13:31:24.244775  700346 command_runner.go:130] > privileged_without_host_devices = false
	I0923 13:31:24.244781  700346 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0923 13:31:24.244787  700346 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0923 13:31:24.244793  700346 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0923 13:31:24.244802  700346 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0923 13:31:24.244811  700346 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0923 13:31:24.244816  700346 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0923 13:31:24.244830  700346 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0923 13:31:24.244839  700346 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0923 13:31:24.244844  700346 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0923 13:31:24.244853  700346 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0923 13:31:24.244859  700346 command_runner.go:130] > # Example:
	I0923 13:31:24.244863  700346 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0923 13:31:24.244868  700346 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0923 13:31:24.244875  700346 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0923 13:31:24.244880  700346 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0923 13:31:24.244883  700346 command_runner.go:130] > # cpuset = 0
	I0923 13:31:24.244889  700346 command_runner.go:130] > # cpushares = "0-1"
	I0923 13:31:24.244892  700346 command_runner.go:130] > # Where:
	I0923 13:31:24.244901  700346 command_runner.go:130] > # The workload name is workload-type.
	I0923 13:31:24.244916  700346 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0923 13:31:24.244927  700346 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0923 13:31:24.244936  700346 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0923 13:31:24.244949  700346 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0923 13:31:24.244961  700346 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
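
To make the annotation shape in the example above concrete, the sketch below parses a per-container override of the form shown there, e.g. io.crio.workload-type/busybox with a JSON value. parseWorkloadOverride is an illustrative helper, not CRI-O's own parser.

	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	// parseWorkloadOverride splits an annotation key of the form
	// "<annotation_prefix>/<containerName>" and decodes its JSON value, such as
	// {"cpushares": "512"}, into per-resource overrides.
	func parseWorkloadOverride(prefix, key, value string) (string, map[string]string, error) {
		if !strings.HasPrefix(key, prefix+"/") {
			return "", nil, fmt.Errorf("annotation %q does not use prefix %q", key, prefix)
		}
		ctrName := strings.TrimPrefix(key, prefix+"/")
		overrides := map[string]string{}
		if err := json.Unmarshal([]byte(value), &overrides); err != nil {
			return "", nil, err
		}
		return ctrName, overrides, nil
	}

	func main() {
		name, res, err := parseWorkloadOverride("io.crio.workload-type",
			"io.crio.workload-type/busybox", `{"cpushares": "512"}`)
		fmt.Println(name, res, err) // busybox map[cpushares:512] <nil>
	}
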
	I0923 13:31:24.244970  700346 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0923 13:31:24.244982  700346 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0923 13:31:24.244991  700346 command_runner.go:130] > # Default value is set to true
	I0923 13:31:24.244998  700346 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0923 13:31:24.245008  700346 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0923 13:31:24.245017  700346 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0923 13:31:24.245021  700346 command_runner.go:130] > # Default value is set to 'false'
	I0923 13:31:24.245026  700346 command_runner.go:130] > # disable_hostport_mapping = false
	I0923 13:31:24.245033  700346 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0923 13:31:24.245038  700346 command_runner.go:130] > #
	I0923 13:31:24.245044  700346 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0923 13:31:24.245050  700346 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0923 13:31:24.245059  700346 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0923 13:31:24.245066  700346 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0923 13:31:24.245071  700346 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0923 13:31:24.245074  700346 command_runner.go:130] > [crio.image]
	I0923 13:31:24.245080  700346 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0923 13:31:24.245083  700346 command_runner.go:130] > # default_transport = "docker://"
	I0923 13:31:24.245092  700346 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0923 13:31:24.245098  700346 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0923 13:31:24.245102  700346 command_runner.go:130] > # global_auth_file = ""
	I0923 13:31:24.245106  700346 command_runner.go:130] > # The image used to instantiate infra containers.
	I0923 13:31:24.245111  700346 command_runner.go:130] > # This option supports live configuration reload.
	I0923 13:31:24.245115  700346 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0923 13:31:24.245121  700346 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0923 13:31:24.245126  700346 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0923 13:31:24.245131  700346 command_runner.go:130] > # This option supports live configuration reload.
	I0923 13:31:24.245135  700346 command_runner.go:130] > # pause_image_auth_file = ""
	I0923 13:31:24.245140  700346 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0923 13:31:24.245146  700346 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0923 13:31:24.245152  700346 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0923 13:31:24.245157  700346 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0923 13:31:24.245161  700346 command_runner.go:130] > # pause_command = "/pause"
	I0923 13:31:24.245166  700346 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0923 13:31:24.245173  700346 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0923 13:31:24.245178  700346 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0923 13:31:24.245185  700346 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0923 13:31:24.245191  700346 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0923 13:31:24.245197  700346 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0923 13:31:24.245200  700346 command_runner.go:130] > # pinned_images = [
	I0923 13:31:24.245203  700346 command_runner.go:130] > # ]
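
The three pattern styles described above for pinned images (exact, trailing glob, keyword) can be illustrated in a few lines of Go; matchesPinned is a hypothetical helper, not CRI-O's matcher.

	package main

	import (
		"fmt"
		"strings"
	)

	// matchesPinned sketches the matching rules from the comments above: exact
	// patterns must cover the whole name, glob patterns may end in a single "*",
	// and keyword patterns carry "*" on both ends.
	func matchesPinned(name, pattern string) bool {
		switch {
		case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
			return strings.Contains(name, strings.Trim(pattern, "*"))
		case strings.HasSuffix(pattern, "*"):
			return strings.HasPrefix(name, strings.TrimSuffix(pattern, "*"))
		default:
			return name == pattern
		}
	}

	func main() {
		img := "registry.k8s.io/pause:3.10"
		fmt.Println(matchesPinned(img, "registry.k8s.io/pause:3.10")) // exact: true
		fmt.Println(matchesPinned(img, "registry.k8s.io/*"))          // glob: true
		fmt.Println(matchesPinned(img, "*pause*"))                    // keyword: true
	}
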
	I0923 13:31:24.245209  700346 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0923 13:31:24.245214  700346 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0923 13:31:24.245220  700346 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0923 13:31:24.245225  700346 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0923 13:31:24.245232  700346 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0923 13:31:24.245237  700346 command_runner.go:130] > # signature_policy = ""
	I0923 13:31:24.245242  700346 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0923 13:31:24.245250  700346 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0923 13:31:24.245256  700346 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0923 13:31:24.245262  700346 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0923 13:31:24.245267  700346 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0923 13:31:24.245272  700346 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
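
Putting the last two settings together, the per-namespace policy path is <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json with a fallback to the global policy when the namespace is missing or the file does not exist. A small sketch of that lookup order (policyPathFor is a hypothetical helper):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"path/filepath"
	)

	// policyPathFor returns the per-namespace policy file if it exists, and the
	// global fallback otherwise, mirroring the lookup described above.
	func policyPathFor(policyDir, namespace, fallback string) string {
		if namespace == "" {
			return fallback
		}
		p := filepath.Join(policyDir, namespace+".json")
		if _, err := os.Stat(p); errors.Is(err, os.ErrNotExist) {
			return fallback
		}
		return p
	}

	func main() {
		fmt.Println(policyPathFor("/etc/crio/policies", "kube-system", "/etc/containers/policy.json"))
	}
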
	I0923 13:31:24.245278  700346 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0923 13:31:24.245286  700346 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0923 13:31:24.245293  700346 command_runner.go:130] > # changing them here.
	I0923 13:31:24.245297  700346 command_runner.go:130] > # insecure_registries = [
	I0923 13:31:24.245300  700346 command_runner.go:130] > # ]
	I0923 13:31:24.245309  700346 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0923 13:31:24.245313  700346 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0923 13:31:24.245318  700346 command_runner.go:130] > # image_volumes = "mkdir"
	I0923 13:31:24.245324  700346 command_runner.go:130] > # Temporary directory to use for storing big files
	I0923 13:31:24.245330  700346 command_runner.go:130] > # big_files_temporary_dir = ""
	I0923 13:31:24.245336  700346 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0923 13:31:24.245342  700346 command_runner.go:130] > # CNI plugins.
	I0923 13:31:24.245346  700346 command_runner.go:130] > [crio.network]
	I0923 13:31:24.245353  700346 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0923 13:31:24.245361  700346 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0923 13:31:24.245365  700346 command_runner.go:130] > # cni_default_network = ""
	I0923 13:31:24.245370  700346 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0923 13:31:24.245375  700346 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0923 13:31:24.245382  700346 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0923 13:31:24.245388  700346 command_runner.go:130] > # plugin_dirs = [
	I0923 13:31:24.245391  700346 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0923 13:31:24.245396  700346 command_runner.go:130] > # ]
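
If cni_default_network is left empty, the comments above say CRI-O picks the first configuration found in network_dir. A rough standard-library sketch of that selection follows; firstCNIConfig is illustrative only, and the real lookup (done via the CNI library) also filters by file type.

	package main

	import (
		"fmt"
		"os"
		"sort"
	)

	// firstCNIConfig lists networkDir, sorts the file names and returns the
	// first entry, approximating the "pick the first one found" behaviour.
	func firstCNIConfig(networkDir string) (string, error) {
		entries, err := os.ReadDir(networkDir)
		if err != nil {
			return "", err
		}
		var names []string
		for _, e := range entries {
			if !e.IsDir() {
				names = append(names, e.Name())
			}
		}
		if len(names) == 0 {
			return "", fmt.Errorf("no CNI configuration found in %s", networkDir)
		}
		sort.Strings(names)
		return names[0], nil
	}

	func main() {
		name, err := firstCNIConfig("/etc/cni/net.d/")
		fmt.Println(name, err)
	}
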
	I0923 13:31:24.245402  700346 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0923 13:31:24.245407  700346 command_runner.go:130] > [crio.metrics]
	I0923 13:31:24.245412  700346 command_runner.go:130] > # Globally enable or disable metrics support.
	I0923 13:31:24.245418  700346 command_runner.go:130] > enable_metrics = true
	I0923 13:31:24.245423  700346 command_runner.go:130] > # Specify enabled metrics collectors.
	I0923 13:31:24.245429  700346 command_runner.go:130] > # Per default all metrics are enabled.
	I0923 13:31:24.245435  700346 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0923 13:31:24.245443  700346 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0923 13:31:24.245448  700346 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0923 13:31:24.245454  700346 command_runner.go:130] > # metrics_collectors = [
	I0923 13:31:24.245458  700346 command_runner.go:130] > # 	"operations",
	I0923 13:31:24.245462  700346 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0923 13:31:24.245467  700346 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0923 13:31:24.245471  700346 command_runner.go:130] > # 	"operations_errors",
	I0923 13:31:24.245475  700346 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0923 13:31:24.245484  700346 command_runner.go:130] > # 	"image_pulls_by_name",
	I0923 13:31:24.245490  700346 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0923 13:31:24.245494  700346 command_runner.go:130] > # 	"image_pulls_failures",
	I0923 13:31:24.245498  700346 command_runner.go:130] > # 	"image_pulls_successes",
	I0923 13:31:24.245502  700346 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0923 13:31:24.245506  700346 command_runner.go:130] > # 	"image_layer_reuse",
	I0923 13:31:24.245510  700346 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0923 13:31:24.245517  700346 command_runner.go:130] > # 	"containers_oom_total",
	I0923 13:31:24.245522  700346 command_runner.go:130] > # 	"containers_oom",
	I0923 13:31:24.245526  700346 command_runner.go:130] > # 	"processes_defunct",
	I0923 13:31:24.245530  700346 command_runner.go:130] > # 	"operations_total",
	I0923 13:31:24.245534  700346 command_runner.go:130] > # 	"operations_latency_seconds",
	I0923 13:31:24.245538  700346 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0923 13:31:24.245542  700346 command_runner.go:130] > # 	"operations_errors_total",
	I0923 13:31:24.245546  700346 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0923 13:31:24.245551  700346 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0923 13:31:24.245557  700346 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0923 13:31:24.245561  700346 command_runner.go:130] > # 	"image_pulls_success_total",
	I0923 13:31:24.245565  700346 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0923 13:31:24.245569  700346 command_runner.go:130] > # 	"containers_oom_count_total",
	I0923 13:31:24.245574  700346 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0923 13:31:24.245581  700346 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0923 13:31:24.245584  700346 command_runner.go:130] > # ]
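
Per the comments preceding this list, collector names are accepted with or without the "crio_" and "container_runtime_" prefixes, so "operations", "crio_operations" and "container_runtime_crio_operations" all refer to the same collector. A small illustrative helper showing that equivalence (normalizeCollector is a made-up name):

	package main

	import (
		"fmt"
		"strings"
	)

	// normalizeCollector strips the optional prefixes so all three spellings
	// above collapse to the same collector name.
	func normalizeCollector(name string) string {
		name = strings.TrimPrefix(name, "container_runtime_")
		return strings.TrimPrefix(name, "crio_")
	}

	func main() {
		for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
			fmt.Println(normalizeCollector(n)) // prints "operations" three times
		}
	}
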
	I0923 13:31:24.245589  700346 command_runner.go:130] > # The port on which the metrics server will listen.
	I0923 13:31:24.245595  700346 command_runner.go:130] > # metrics_port = 9090
	I0923 13:31:24.245600  700346 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0923 13:31:24.245606  700346 command_runner.go:130] > # metrics_socket = ""
	I0923 13:31:24.245611  700346 command_runner.go:130] > # The certificate for the secure metrics server.
	I0923 13:31:24.245618  700346 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0923 13:31:24.245624  700346 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0923 13:31:24.245630  700346 command_runner.go:130] > # certificate on any modification event.
	I0923 13:31:24.245634  700346 command_runner.go:130] > # metrics_cert = ""
	I0923 13:31:24.245641  700346 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0923 13:31:24.245647  700346 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0923 13:31:24.245651  700346 command_runner.go:130] > # metrics_key = ""
	I0923 13:31:24.245657  700346 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0923 13:31:24.245661  700346 command_runner.go:130] > [crio.tracing]
	I0923 13:31:24.245666  700346 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0923 13:31:24.245671  700346 command_runner.go:130] > # enable_tracing = false
	I0923 13:31:24.245676  700346 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0923 13:31:24.245682  700346 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0923 13:31:24.245688  700346 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0923 13:31:24.245693  700346 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0923 13:31:24.245699  700346 command_runner.go:130] > # CRI-O NRI configuration.
	I0923 13:31:24.245703  700346 command_runner.go:130] > [crio.nri]
	I0923 13:31:24.245707  700346 command_runner.go:130] > # Globally enable or disable NRI.
	I0923 13:31:24.245711  700346 command_runner.go:130] > # enable_nri = false
	I0923 13:31:24.245715  700346 command_runner.go:130] > # NRI socket to listen on.
	I0923 13:31:24.245719  700346 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0923 13:31:24.245723  700346 command_runner.go:130] > # NRI plugin directory to use.
	I0923 13:31:24.245728  700346 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0923 13:31:24.245735  700346 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0923 13:31:24.245742  700346 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0923 13:31:24.245748  700346 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0923 13:31:24.245755  700346 command_runner.go:130] > # nri_disable_connections = false
	I0923 13:31:24.245760  700346 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0923 13:31:24.245766  700346 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0923 13:31:24.245771  700346 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0923 13:31:24.245775  700346 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0923 13:31:24.245780  700346 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0923 13:31:24.245786  700346 command_runner.go:130] > [crio.stats]
	I0923 13:31:24.245791  700346 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0923 13:31:24.245798  700346 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0923 13:31:24.245802  700346 command_runner.go:130] > # stats_collection_period = 0
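
As the last two options describe, a stats_collection_period of 0 switches CRI-O to on-demand collection, while any other value drives a periodic sweep. A rough standard-library sketch of those two modes (collectStats and runStatsLoop are made-up names, not CRI-O's implementation):

	package main

	import (
		"fmt"
		"time"
	)

	// collectStats stands in for an actual stats sweep.
	func collectStats() { fmt.Println("collecting pod and container stats") }

	// runStatsLoop collects on demand when periodSeconds is 0, otherwise on a
	// fixed ticker, mirroring the two modes described above.
	func runStatsLoop(periodSeconds int, demand <-chan struct{}) {
		if periodSeconds == 0 {
			for range demand { // collect only when explicitly asked
				collectStats()
			}
			return
		}
		ticker := time.NewTicker(time.Duration(periodSeconds) * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			collectStats()
		}
	}

	func main() {
		demand := make(chan struct{}, 1)
		demand <- struct{}{}
		close(demand)
		runStatsLoop(0, demand) // on-demand mode: collects once, then returns
	}
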
	I0923 13:31:24.246277  700346 command_runner.go:130] ! time="2024-09-23 13:31:24.206836088Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0923 13:31:24.246306  700346 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0923 13:31:24.246411  700346 cni.go:84] Creating CNI manager for ""
	I0923 13:31:24.246428  700346 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:31:24.246440  700346 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:31:24.246476  700346 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-851928 NodeName:multinode-851928 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:31:24.246628  700346 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-851928"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 13:31:24.246695  700346 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:31:24.257427  700346 command_runner.go:130] > kubeadm
	I0923 13:31:24.257452  700346 command_runner.go:130] > kubectl
	I0923 13:31:24.257457  700346 command_runner.go:130] > kubelet
	I0923 13:31:24.257482  700346 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:31:24.257552  700346 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:31:24.267759  700346 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0923 13:31:24.284841  700346 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:31:24.301903  700346 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
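
The three scp transfers above ship the kubelet drop-in, the kubelet unit and the kubeadm config rendered earlier (the YAML dump above) to the node, with kubeadm.yaml.new landing under /var/tmp/minikube. A minimal sketch of how such a config could be rendered with text/template, assuming a trimmed, hypothetical template that parameterises only a few of the fields shown above:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmTmpl is a cut-down, hypothetical version of the InitConfiguration
	// fragment printed above; only the advertise address, API server port and
	// node name are parameterised here.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	`

	func main() {
		params := struct {
			AdvertiseAddress string
			APIServerPort    int
			NodeName         string
		}{"192.168.39.168", 8443, "multinode-851928"}

		f, err := os.Create("/tmp/kubeadm.yaml.new") // the real file is copied to /var/tmp/minikube
		if err != nil {
			panic(err)
		}
		defer f.Close()
		if err := template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(f, params); err != nil {
			panic(err)
		}
	}
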
	I0923 13:31:24.318354  700346 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0923 13:31:24.322165  700346 command_runner.go:130] > 192.168.39.168	control-plane.minikube.internal
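
The grep above confirms that control-plane.minikube.internal already resolves to 192.168.39.168 on the node. A minimal standard-library sketch of that check (hasHostEntry is a hypothetical helper, not minikube's code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// hasHostEntry scans an /etc/hosts-style file for a line mapping ip to host,
	// mirroring the grep performed above.
	func hasHostEntry(path, ip, host string) (bool, error) {
		f, err := os.Open(path)
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) >= 2 && fields[0] == ip {
				for _, name := range fields[1:] {
					if name == host {
						return true, nil
					}
				}
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := hasHostEntry("/etc/hosts", "192.168.39.168", "control-plane.minikube.internal")
		fmt.Println(ok, err)
	}
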
	I0923 13:31:24.322268  700346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:31:24.464275  700346 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:31:24.479388  700346 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928 for IP: 192.168.39.168
	I0923 13:31:24.479419  700346 certs.go:194] generating shared ca certs ...
	I0923 13:31:24.479438  700346 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:31:24.479622  700346 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 13:31:24.479659  700346 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 13:31:24.479668  700346 certs.go:256] generating profile certs ...
	I0923 13:31:24.479763  700346 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/client.key
	I0923 13:31:24.479835  700346 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/apiserver.key.897c86c7
	I0923 13:31:24.479869  700346 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/proxy-client.key
	I0923 13:31:24.479881  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:31:24.479899  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:31:24.479912  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:31:24.479922  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:31:24.479934  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 13:31:24.479947  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 13:31:24.479959  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 13:31:24.479970  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 13:31:24.480019  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 13:31:24.480047  700346 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 13:31:24.480056  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 13:31:24.480077  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 13:31:24.480101  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:31:24.480124  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 13:31:24.480161  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:31:24.480191  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:31:24.480203  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 13:31:24.480213  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 13:31:24.480839  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:31:24.506795  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:31:24.532921  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:31:24.558600  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:31:24.584744  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 13:31:24.612111  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 13:31:24.636842  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:31:24.660958  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 13:31:24.685304  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:31:24.710049  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 13:31:24.734431  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 13:31:24.760047  700346 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:31:24.777922  700346 ssh_runner.go:195] Run: openssl version
	I0923 13:31:24.783832  700346 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 13:31:24.783927  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:31:24.794949  700346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:31:24.799148  700346 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:31:24.799236  700346 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:31:24.799337  700346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:31:24.804750  700346 command_runner.go:130] > b5213941
	I0923 13:31:24.804850  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:31:24.814371  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 13:31:24.825128  700346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 13:31:24.829436  700346 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 13:31:24.829475  700346 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 13:31:24.829516  700346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 13:31:24.834928  700346 command_runner.go:130] > 51391683
	I0923 13:31:24.835037  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 13:31:24.844672  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 13:31:24.855224  700346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 13:31:24.859605  700346 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 13:31:24.859728  700346 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 13:31:24.859785  700346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 13:31:24.865348  700346 command_runner.go:130] > 3ec20f2e
	I0923 13:31:24.865451  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
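
The loop above hashes each CA certificate with `openssl x509 -hash -noout` and links it into /etc/ssl/certs as `<hash>.0` so OpenSSL can find it by subject hash. A sketch of the same two steps (linkCertByHash is a hypothetical helper; it needs root to write into /etc/ssl/certs):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash asks openssl for the certificate's subject hash and creates
	// the <hash>.0 symlink in certsDir if it is not already present.
	func linkCertByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // link (or file) already there
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		fmt.Println(err)
	}
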
	I0923 13:31:24.882267  700346 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:31:24.891053  700346 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:31:24.891096  700346 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0923 13:31:24.891105  700346 command_runner.go:130] > Device: 253,1	Inode: 6289960     Links: 1
	I0923 13:31:24.891114  700346 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 13:31:24.891123  700346 command_runner.go:130] > Access: 2024-09-23 13:24:38.817981363 +0000
	I0923 13:31:24.891129  700346 command_runner.go:130] > Modify: 2024-09-23 13:24:38.817981363 +0000
	I0923 13:31:24.891136  700346 command_runner.go:130] > Change: 2024-09-23 13:24:38.817981363 +0000
	I0923 13:31:24.891143  700346 command_runner.go:130] >  Birth: 2024-09-23 13:24:38.817981363 +0000
	I0923 13:31:24.891450  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 13:31:24.901001  700346 command_runner.go:130] > Certificate will not expire
	I0923 13:31:24.901155  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 13:31:24.906807  700346 command_runner.go:130] > Certificate will not expire
	I0923 13:31:24.906903  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 13:31:24.912729  700346 command_runner.go:130] > Certificate will not expire
	I0923 13:31:24.912843  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 13:31:24.918576  700346 command_runner.go:130] > Certificate will not expire
	I0923 13:31:24.918676  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 13:31:24.924441  700346 command_runner.go:130] > Certificate will not expire
	I0923 13:31:24.924587  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 13:31:24.930113  700346 command_runner.go:130] > Certificate will not expire
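
Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours. The same check can be expressed with Go's standard library; expiresWithin below is an illustrative equivalent, not the code minikube actually runs.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate's NotAfter falls inside the
	// given window, the same question "openssl x509 -checkend 86400" answers.
	func expiresWithin(certPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}
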
	I0923 13:31:24.930180  700346 kubeadm.go:392] StartCluster: {Name:multinode-851928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-851928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.25 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:31:24.930331  700346 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 13:31:24.930398  700346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:31:24.968169  700346 command_runner.go:130] > f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983
	I0923 13:31:24.968208  700346 command_runner.go:130] > 1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2
	I0923 13:31:24.968217  700346 command_runner.go:130] > 56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f
	I0923 13:31:24.968228  700346 command_runner.go:130] > 618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194
	I0923 13:31:24.968237  700346 command_runner.go:130] > 0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c
	I0923 13:31:24.968246  700346 command_runner.go:130] > eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4
	I0923 13:31:24.968257  700346 command_runner.go:130] > 306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16
	I0923 13:31:24.968266  700346 command_runner.go:130] > 692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c
	I0923 13:31:24.968300  700346 cri.go:89] found id: "f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983"
	I0923 13:31:24.968309  700346 cri.go:89] found id: "1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2"
	I0923 13:31:24.968313  700346 cri.go:89] found id: "56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f"
	I0923 13:31:24.968316  700346 cri.go:89] found id: "618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194"
	I0923 13:31:24.968319  700346 cri.go:89] found id: "0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c"
	I0923 13:31:24.968325  700346 cri.go:89] found id: "eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4"
	I0923 13:31:24.968328  700346 cri.go:89] found id: "306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16"
	I0923 13:31:24.968330  700346 cri.go:89] found id: "692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c"
	I0923 13:31:24.968333  700346 cri.go:89] found id: ""
	I0923 13:31:24.968378  700346 ssh_runner.go:195] Run: sudo runc list -f json
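
Before restarting the cluster, cri.go enumerates the kube-system containers by running the crictl command shown above and splitting its output into IDs. A minimal sketch of that listing step (listKubeSystemContainers is a hypothetical helper; the real code also applies the requested state filter and wraps the call differently):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers runs the same crictl invocation shown above and
	// returns one container ID per non-empty output line.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		fmt.Println(len(ids), "containers found, err:", err)
	}
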
	
	
	==> CRI-O <==
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.535968114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098390535944970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=826c299f-2afb-4bc3-9d5e-eec114b1be52 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.536486225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=305acc9f-1fee-44b6-ba5a-f286020c3b2f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.536553030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=305acc9f-1fee-44b6-ba5a-f286020c3b2f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.536951358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:359df88602eac689ea6f8a2d7a9975a0547e1bed39f60881b82a92736e6cb009,PodSandboxId:f6165aa9f5232cfa3983ad7ba5f2b01443acb0172047f93b410cfee89ed7e6c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727098325018657974,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99756f6019b4d7b958225ab6f9327a6b8c9203fbf3a2d830b5062cdd86647ba,PodSandboxId:ca0338566743328d45d72d7892aec5e33c54ac1153f64b0d8b1e540310a4ac9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727098291493819395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654185fbfcb03063b85dbe6773f55ad2831999a20283aff14d6229a7ce62f672,PodSandboxId:88dbc26750f98450a4227b6447782034ac35694dfbd57bd8a44b24bc4e3b3a16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727098291533214262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db6c0468a85317df9394dd318f8bc43400bc0ce88d7077411f6b535fd107e48,PodSandboxId:7a8400923cb977887a353ac861412e9718431dd173807754346e28fbbf73f550,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727098291368736908,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b-b32c85869126,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6979fac3f5b0c2eaef8551a532606c73e38cd83f774096644077a457b0c1ff,PodSandboxId:a4037f5308176d337b2deca77b44b8af2c73565b79e10ee4714a1a5d145710ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098291320194016,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ac0945c09a0432f1c1e1b73c691250779a94d3a34f10a893924ea884b2b3d0,PodSandboxId:d71d835adcd1d15844807d2a291e6fef6cb20e1b18b5172ee25e4cbbbac47f3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727098287473406807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ae473221e9e088de9eef2ef703d5d3f0766f4014f7fbc1d037c679d3e2baac,PodSandboxId:347f0941c6625f27283194de8d4fd32006b3d380888a85678b2f8a7063e5aa4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727098287475020753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e3cea42ba31eed103d200b57d82aaa7e8e100a2266c766104a3a52b620a95f,PodSandboxId:828939191cba88d1217296fdda58434ecfb1563b0377c4b0b25bbce93519acfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727098287437499929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e4fac777cc7ccf86d00ee7e26a7e351940e38e66d7ed676e56f1c859842e6bf,PodSandboxId:055632a81dcd102f90add8d8a980bfe8dc44e947f121f368e0a4854dc05c8b58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727098287404054814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c269baf9d7255a9bfe6cfce34bb0d26bc2c217b16c3c6150bd0199bb43fe0fd,PodSandboxId:68ce621d149dbab89bbf1d40250aeafe88c567a5cda763b753d58dbe7b5983bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727097962330352006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983,PodSandboxId:7e8debb3a7a32a909f49fdbb6b4e160cf52736f78ed06f1b8109978ac36d9d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727097906078294996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2,PodSandboxId:93eb120762a7ecc71cd179f3bdad4a6269c404b8b84f7807e052cc87f2cbe855,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727097906023278582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f,PodSandboxId:6d7dd123595bb380fa71402185f5498f549c3dd5b06c7706f0731b2908d24371,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727097894067120166,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194,PodSandboxId:799bf1c098365a9033865dcd72e0da7b176b1f4120dc73536f70e4e1709169ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727097893819208405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b
-b32c85869126,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c,PodSandboxId:ce7bd0baa5c47857eca3027f914a72474f6728c7a0efb049b7d513de9c55b8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727097883225903642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16,PodSandboxId:8df9ec5d7f3b25e1c83cdfa42036161a0e93ee475c14e4a554ada703c1ce083a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727097883190370572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4,PodSandboxId:25e766a6006484584935633eff06c186880f1751c10365776bd6e07ac9e5a007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727097883221568719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c,PodSandboxId:6f6f297d2bf0b1ed1d4a99885a76e9163ea663e071b08c4b311ac857f507925f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727097883164371598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=305acc9f-1fee-44b6-ba5a-f286020c3b2f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.577271216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=309e0f6d-9e14-4fa5-ba92-886481c9cef7 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.577344726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=309e0f6d-9e14-4fa5-ba92-886481c9cef7 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.579112024Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4701c53f-dc0b-4997-9373-85d283afcfdf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.579535945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098390579504259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4701c53f-dc0b-4997-9373-85d283afcfdf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.580298034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=183cecef-50a2-451b-8c0b-5491fecbce8d name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.580363553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=183cecef-50a2-451b-8c0b-5491fecbce8d name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.580904440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:359df88602eac689ea6f8a2d7a9975a0547e1bed39f60881b82a92736e6cb009,PodSandboxId:f6165aa9f5232cfa3983ad7ba5f2b01443acb0172047f93b410cfee89ed7e6c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727098325018657974,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99756f6019b4d7b958225ab6f9327a6b8c9203fbf3a2d830b5062cdd86647ba,PodSandboxId:ca0338566743328d45d72d7892aec5e33c54ac1153f64b0d8b1e540310a4ac9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727098291493819395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654185fbfcb03063b85dbe6773f55ad2831999a20283aff14d6229a7ce62f672,PodSandboxId:88dbc26750f98450a4227b6447782034ac35694dfbd57bd8a44b24bc4e3b3a16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727098291533214262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db6c0468a85317df9394dd318f8bc43400bc0ce88d7077411f6b535fd107e48,PodSandboxId:7a8400923cb977887a353ac861412e9718431dd173807754346e28fbbf73f550,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727098291368736908,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b-b32c85869126,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6979fac3f5b0c2eaef8551a532606c73e38cd83f774096644077a457b0c1ff,PodSandboxId:a4037f5308176d337b2deca77b44b8af2c73565b79e10ee4714a1a5d145710ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098291320194016,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ac0945c09a0432f1c1e1b73c691250779a94d3a34f10a893924ea884b2b3d0,PodSandboxId:d71d835adcd1d15844807d2a291e6fef6cb20e1b18b5172ee25e4cbbbac47f3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727098287473406807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ae473221e9e088de9eef2ef703d5d3f0766f4014f7fbc1d037c679d3e2baac,PodSandboxId:347f0941c6625f27283194de8d4fd32006b3d380888a85678b2f8a7063e5aa4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727098287475020753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e3cea42ba31eed103d200b57d82aaa7e8e100a2266c766104a3a52b620a95f,PodSandboxId:828939191cba88d1217296fdda58434ecfb1563b0377c4b0b25bbce93519acfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727098287437499929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e4fac777cc7ccf86d00ee7e26a7e351940e38e66d7ed676e56f1c859842e6bf,PodSandboxId:055632a81dcd102f90add8d8a980bfe8dc44e947f121f368e0a4854dc05c8b58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727098287404054814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c269baf9d7255a9bfe6cfce34bb0d26bc2c217b16c3c6150bd0199bb43fe0fd,PodSandboxId:68ce621d149dbab89bbf1d40250aeafe88c567a5cda763b753d58dbe7b5983bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727097962330352006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983,PodSandboxId:7e8debb3a7a32a909f49fdbb6b4e160cf52736f78ed06f1b8109978ac36d9d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727097906078294996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2,PodSandboxId:93eb120762a7ecc71cd179f3bdad4a6269c404b8b84f7807e052cc87f2cbe855,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727097906023278582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f,PodSandboxId:6d7dd123595bb380fa71402185f5498f549c3dd5b06c7706f0731b2908d24371,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727097894067120166,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194,PodSandboxId:799bf1c098365a9033865dcd72e0da7b176b1f4120dc73536f70e4e1709169ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727097893819208405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b
-b32c85869126,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c,PodSandboxId:ce7bd0baa5c47857eca3027f914a72474f6728c7a0efb049b7d513de9c55b8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727097883225903642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16,PodSandboxId:8df9ec5d7f3b25e1c83cdfa42036161a0e93ee475c14e4a554ada703c1ce083a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727097883190370572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4,PodSandboxId:25e766a6006484584935633eff06c186880f1751c10365776bd6e07ac9e5a007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727097883221568719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c,PodSandboxId:6f6f297d2bf0b1ed1d4a99885a76e9163ea663e071b08c4b311ac857f507925f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727097883164371598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=183cecef-50a2-451b-8c0b-5491fecbce8d name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.622453834Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62a90a84-b89d-42fd-9a6f-4d44fcf082d1 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.622535047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62a90a84-b89d-42fd-9a6f-4d44fcf082d1 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.623715646Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=099f0a7b-f039-4464-9333-178e59ef1b82 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.624128469Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098390624102807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=099f0a7b-f039-4464-9333-178e59ef1b82 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.624685369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e41f18e1-8052-42e1-abeb-445fd63a254c name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.624746647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e41f18e1-8052-42e1-abeb-445fd63a254c name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.625123785Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:359df88602eac689ea6f8a2d7a9975a0547e1bed39f60881b82a92736e6cb009,PodSandboxId:f6165aa9f5232cfa3983ad7ba5f2b01443acb0172047f93b410cfee89ed7e6c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727098325018657974,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99756f6019b4d7b958225ab6f9327a6b8c9203fbf3a2d830b5062cdd86647ba,PodSandboxId:ca0338566743328d45d72d7892aec5e33c54ac1153f64b0d8b1e540310a4ac9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727098291493819395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654185fbfcb03063b85dbe6773f55ad2831999a20283aff14d6229a7ce62f672,PodSandboxId:88dbc26750f98450a4227b6447782034ac35694dfbd57bd8a44b24bc4e3b3a16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727098291533214262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db6c0468a85317df9394dd318f8bc43400bc0ce88d7077411f6b535fd107e48,PodSandboxId:7a8400923cb977887a353ac861412e9718431dd173807754346e28fbbf73f550,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727098291368736908,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b-b32c85869126,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6979fac3f5b0c2eaef8551a532606c73e38cd83f774096644077a457b0c1ff,PodSandboxId:a4037f5308176d337b2deca77b44b8af2c73565b79e10ee4714a1a5d145710ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098291320194016,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ac0945c09a0432f1c1e1b73c691250779a94d3a34f10a893924ea884b2b3d0,PodSandboxId:d71d835adcd1d15844807d2a291e6fef6cb20e1b18b5172ee25e4cbbbac47f3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727098287473406807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ae473221e9e088de9eef2ef703d5d3f0766f4014f7fbc1d037c679d3e2baac,PodSandboxId:347f0941c6625f27283194de8d4fd32006b3d380888a85678b2f8a7063e5aa4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727098287475020753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e3cea42ba31eed103d200b57d82aaa7e8e100a2266c766104a3a52b620a95f,PodSandboxId:828939191cba88d1217296fdda58434ecfb1563b0377c4b0b25bbce93519acfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727098287437499929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e4fac777cc7ccf86d00ee7e26a7e351940e38e66d7ed676e56f1c859842e6bf,PodSandboxId:055632a81dcd102f90add8d8a980bfe8dc44e947f121f368e0a4854dc05c8b58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727098287404054814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c269baf9d7255a9bfe6cfce34bb0d26bc2c217b16c3c6150bd0199bb43fe0fd,PodSandboxId:68ce621d149dbab89bbf1d40250aeafe88c567a5cda763b753d58dbe7b5983bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727097962330352006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983,PodSandboxId:7e8debb3a7a32a909f49fdbb6b4e160cf52736f78ed06f1b8109978ac36d9d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727097906078294996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2,PodSandboxId:93eb120762a7ecc71cd179f3bdad4a6269c404b8b84f7807e052cc87f2cbe855,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727097906023278582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f,PodSandboxId:6d7dd123595bb380fa71402185f5498f549c3dd5b06c7706f0731b2908d24371,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727097894067120166,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194,PodSandboxId:799bf1c098365a9033865dcd72e0da7b176b1f4120dc73536f70e4e1709169ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727097893819208405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b
-b32c85869126,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c,PodSandboxId:ce7bd0baa5c47857eca3027f914a72474f6728c7a0efb049b7d513de9c55b8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727097883225903642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16,PodSandboxId:8df9ec5d7f3b25e1c83cdfa42036161a0e93ee475c14e4a554ada703c1ce083a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727097883190370572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4,PodSandboxId:25e766a6006484584935633eff06c186880f1751c10365776bd6e07ac9e5a007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727097883221568719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c,PodSandboxId:6f6f297d2bf0b1ed1d4a99885a76e9163ea663e071b08c4b311ac857f507925f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727097883164371598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e41f18e1-8052-42e1-abeb-445fd63a254c name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.667156051Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7ba4a6c-f84c-4708-9417-7e5083f1e58c name=/runtime.v1.RuntimeService/Version
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.667236391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7ba4a6c-f84c-4708-9417-7e5083f1e58c name=/runtime.v1.RuntimeService/Version
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.668409199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49067aae-a2e9-4dfe-a578-80d163a9088b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.668881457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098390668853753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49067aae-a2e9-4dfe-a578-80d163a9088b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.669448150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6ba86f3-a6a4-4b87-9588-66bb2bec0905 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.669513553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6ba86f3-a6a4-4b87-9588-66bb2bec0905 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:33:10 multinode-851928 crio[2731]: time="2024-09-23 13:33:10.669927209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:359df88602eac689ea6f8a2d7a9975a0547e1bed39f60881b82a92736e6cb009,PodSandboxId:f6165aa9f5232cfa3983ad7ba5f2b01443acb0172047f93b410cfee89ed7e6c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727098325018657974,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99756f6019b4d7b958225ab6f9327a6b8c9203fbf3a2d830b5062cdd86647ba,PodSandboxId:ca0338566743328d45d72d7892aec5e33c54ac1153f64b0d8b1e540310a4ac9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727098291493819395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654185fbfcb03063b85dbe6773f55ad2831999a20283aff14d6229a7ce62f672,PodSandboxId:88dbc26750f98450a4227b6447782034ac35694dfbd57bd8a44b24bc4e3b3a16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727098291533214262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db6c0468a85317df9394dd318f8bc43400bc0ce88d7077411f6b535fd107e48,PodSandboxId:7a8400923cb977887a353ac861412e9718431dd173807754346e28fbbf73f550,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727098291368736908,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b-b32c85869126,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6979fac3f5b0c2eaef8551a532606c73e38cd83f774096644077a457b0c1ff,PodSandboxId:a4037f5308176d337b2deca77b44b8af2c73565b79e10ee4714a1a5d145710ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098291320194016,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ac0945c09a0432f1c1e1b73c691250779a94d3a34f10a893924ea884b2b3d0,PodSandboxId:d71d835adcd1d15844807d2a291e6fef6cb20e1b18b5172ee25e4cbbbac47f3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727098287473406807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ae473221e9e088de9eef2ef703d5d3f0766f4014f7fbc1d037c679d3e2baac,PodSandboxId:347f0941c6625f27283194de8d4fd32006b3d380888a85678b2f8a7063e5aa4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727098287475020753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e3cea42ba31eed103d200b57d82aaa7e8e100a2266c766104a3a52b620a95f,PodSandboxId:828939191cba88d1217296fdda58434ecfb1563b0377c4b0b25bbce93519acfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727098287437499929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e4fac777cc7ccf86d00ee7e26a7e351940e38e66d7ed676e56f1c859842e6bf,PodSandboxId:055632a81dcd102f90add8d8a980bfe8dc44e947f121f368e0a4854dc05c8b58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727098287404054814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c269baf9d7255a9bfe6cfce34bb0d26bc2c217b16c3c6150bd0199bb43fe0fd,PodSandboxId:68ce621d149dbab89bbf1d40250aeafe88c567a5cda763b753d58dbe7b5983bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727097962330352006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983,PodSandboxId:7e8debb3a7a32a909f49fdbb6b4e160cf52736f78ed06f1b8109978ac36d9d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727097906078294996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2,PodSandboxId:93eb120762a7ecc71cd179f3bdad4a6269c404b8b84f7807e052cc87f2cbe855,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727097906023278582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f,PodSandboxId:6d7dd123595bb380fa71402185f5498f549c3dd5b06c7706f0731b2908d24371,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727097894067120166,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194,PodSandboxId:799bf1c098365a9033865dcd72e0da7b176b1f4120dc73536f70e4e1709169ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727097893819208405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b
-b32c85869126,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c,PodSandboxId:ce7bd0baa5c47857eca3027f914a72474f6728c7a0efb049b7d513de9c55b8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727097883225903642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16,PodSandboxId:8df9ec5d7f3b25e1c83cdfa42036161a0e93ee475c14e4a554ada703c1ce083a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727097883190370572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4,PodSandboxId:25e766a6006484584935633eff06c186880f1751c10365776bd6e07ac9e5a007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727097883221568719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c,PodSandboxId:6f6f297d2bf0b1ed1d4a99885a76e9163ea663e071b08c4b311ac857f507925f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727097883164371598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6ba86f3-a6a4-4b87-9588-66bb2bec0905 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	359df88602eac       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   f6165aa9f5232       busybox-7dff88458-gl4bk
	654185fbfcb03       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   88dbc26750f98       coredns-7c65d6cfc9-vwqlq
	d99756f6019b4       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   ca03385667433       kindnet-c8x2d
	1db6c0468a853       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   7a8400923cb97       kube-proxy-s52gf
	de6979fac3f5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   a4037f5308176       storage-provisioner
	24ae473221e9e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   347f0941c6625       etcd-multinode-851928
	f9ac0945c09a0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   d71d835adcd1d       kube-scheduler-multinode-851928
	53e3cea42ba31       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   828939191cba8       kube-controller-manager-multinode-851928
	9e4fac777cc7c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   055632a81dcd1       kube-apiserver-multinode-851928
	3c269baf9d725       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   68ce621d149db       busybox-7dff88458-gl4bk
	f3ee062c82e96       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   7e8debb3a7a32       coredns-7c65d6cfc9-vwqlq
	1b801f8d1903d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   93eb120762a7e       storage-provisioner
	56cde957d502e       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   6d7dd123595bb       kindnet-c8x2d
	618cba5848a3c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   799bf1c098365       kube-proxy-s52gf
	0f70273abbde2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   ce7bd0baa5c47       kube-scheduler-multinode-851928
	eec587e30a7bb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   25e766a600648       kube-controller-manager-multinode-851928
	306c5ac129489       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   8df9ec5d7f3b2       etcd-multinode-851928
	692d9ab32ac92       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   6f6f297d2bf0b       kube-apiserver-multinode-851928
	
	
	==> coredns [654185fbfcb03063b85dbe6773f55ad2831999a20283aff14d6229a7ce62f672] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44144 - 31565 "HINFO IN 2076000021483381523.7966737289315741758. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015286557s
	
	
	==> coredns [f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983] <==
	[INFO] 10.244.0.3:55317 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002277841s
	[INFO] 10.244.0.3:53261 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147555s
	[INFO] 10.244.0.3:47932 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082196s
	[INFO] 10.244.0.3:48404 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001589907s
	[INFO] 10.244.0.3:53795 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008073s
	[INFO] 10.244.0.3:47751 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082714s
	[INFO] 10.244.0.3:46818 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072306s
	[INFO] 10.244.1.2:46213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167812s
	[INFO] 10.244.1.2:60560 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104495s
	[INFO] 10.244.1.2:52294 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085458s
	[INFO] 10.244.1.2:37666 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119278s
	[INFO] 10.244.0.3:39344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102984s
	[INFO] 10.244.0.3:56258 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147189s
	[INFO] 10.244.0.3:44491 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088834s
	[INFO] 10.244.0.3:60210 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072307s
	[INFO] 10.244.1.2:38393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134145s
	[INFO] 10.244.1.2:39793 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000226028s
	[INFO] 10.244.1.2:46395 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000140061s
	[INFO] 10.244.1.2:56458 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000125366s
	[INFO] 10.244.0.3:46222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128771s
	[INFO] 10.244.0.3:48880 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064585s
	[INFO] 10.244.0.3:51024 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074571s
	[INFO] 10.244.0.3:35502 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000055063s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-851928
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-851928
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-851928
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_24_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:24:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-851928
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:33:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:31:30 +0000   Mon, 23 Sep 2024 13:24:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:31:30 +0000   Mon, 23 Sep 2024 13:24:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:31:30 +0000   Mon, 23 Sep 2024 13:24:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:31:30 +0000   Mon, 23 Sep 2024 13:25:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    multinode-851928
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 12f97256f1164843ab6c37f2bd6746c2
	  System UUID:                12f97256-f116-4843-ab6c-37f2bd6746c2
	  Boot ID:                    f4ef7a41-b130-453c-b780-b9b1171eb465
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gl4bk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 coredns-7c65d6cfc9-vwqlq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m17s
	  kube-system                 etcd-multinode-851928                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m22s
	  kube-system                 kindnet-c8x2d                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m17s
	  kube-system                 kube-apiserver-multinode-851928             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-controller-manager-multinode-851928    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-proxy-s52gf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-scheduler-multinode-851928             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m16s                  kube-proxy       
	  Normal  Starting                 99s                    kube-proxy       
	  Normal  Starting                 8m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m28s (x8 over 8m29s)  kubelet          Node multinode-851928 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s (x8 over 8m29s)  kubelet          Node multinode-851928 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m28s (x7 over 8m29s)  kubelet          Node multinode-851928 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m22s                  kubelet          Node multinode-851928 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m22s                  kubelet          Node multinode-851928 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m22s                  kubelet          Node multinode-851928 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m22s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m18s                  node-controller  Node multinode-851928 event: Registered Node multinode-851928 in Controller
	  Normal  NodeReady                8m5s                   kubelet          Node multinode-851928 status is now: NodeReady
	  Normal  Starting                 104s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)    kubelet          Node multinode-851928 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)    kubelet          Node multinode-851928 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)    kubelet          Node multinode-851928 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                    node-controller  Node multinode-851928 event: Registered Node multinode-851928 in Controller
	
	
	Name:               multinode-851928-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-851928-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-851928
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_32_09_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:32:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-851928-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:33:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:32:39 +0000   Mon, 23 Sep 2024 13:32:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:32:39 +0000   Mon, 23 Sep 2024 13:32:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:32:39 +0000   Mon, 23 Sep 2024 13:32:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:32:39 +0000   Mon, 23 Sep 2024 13:32:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.25
	  Hostname:    multinode-851928-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2078f891f8d843f0b52842500fda8541
	  System UUID:                2078f891-f8d8-43f0-b528-42500fda8541
	  Boot ID:                    cd73d277-2245-4a26-8011-80494fd2b5ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zrc2v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kindnet-wxjn6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m36s
	  kube-system                 kube-proxy-tbjrf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m29s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m36s (x2 over 7m36s)  kubelet     Node multinode-851928-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s (x2 over 7m36s)  kubelet     Node multinode-851928-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s (x2 over 7m36s)  kubelet     Node multinode-851928-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m15s                  kubelet     Node multinode-851928-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  63s (x2 over 63s)      kubelet     Node multinode-851928-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x2 over 63s)      kubelet     Node multinode-851928-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x2 over 63s)      kubelet     Node multinode-851928-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  63s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                43s                    kubelet     Node multinode-851928-m02 status is now: NodeReady
	
	
	Name:               multinode-851928-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-851928-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-851928
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_32_49_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:32:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-851928-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:33:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:33:07 +0000   Mon, 23 Sep 2024 13:32:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:33:07 +0000   Mon, 23 Sep 2024 13:32:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:33:07 +0000   Mon, 23 Sep 2024 13:32:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:33:07 +0000   Mon, 23 Sep 2024 13:33:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    multinode-851928-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3b3aa87716ad469e846d72f10a2bc59a
	  System UUID:                3b3aa877-16ad-469e-846d-72f10a2bc59a
	  Boot ID:                    84333b9b-6bbf-4f25-9809-6ee7ae5c9e2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-w8srs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m37s
	  kube-system                 kube-proxy-vx85t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m37s (x2 over 6m37s)  kubelet          Node multinode-851928-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x2 over 6m37s)  kubelet          Node multinode-851928-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x2 over 6m37s)  kubelet          Node multinode-851928-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m17s                  kubelet          Node multinode-851928-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet          Node multinode-851928-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet          Node multinode-851928-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet          Node multinode-851928-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m26s                  kubelet          Node multinode-851928-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet          Node multinode-851928-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet          Node multinode-851928-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet          Node multinode-851928-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node multinode-851928-m03 event: Registered Node multinode-851928-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-851928-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.064556] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055603] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.197628] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.126284] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.295588] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.943745] systemd-fstab-generator[741]: Ignoring "noauto" option for root device
	[  +4.026209] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.058691] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.005525] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.085315] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.348922] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.306699] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[Sep23 13:25] kauditd_printk_skb: 60 callbacks suppressed
	[ +53.153149] kauditd_printk_skb: 12 callbacks suppressed
	[Sep23 13:31] systemd-fstab-generator[2655]: Ignoring "noauto" option for root device
	[  +0.145002] systemd-fstab-generator[2667]: Ignoring "noauto" option for root device
	[  +0.172038] systemd-fstab-generator[2681]: Ignoring "noauto" option for root device
	[  +0.135290] systemd-fstab-generator[2693]: Ignoring "noauto" option for root device
	[  +0.290138] systemd-fstab-generator[2722]: Ignoring "noauto" option for root device
	[  +0.776983] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[  +2.167724] systemd-fstab-generator[2931]: Ignoring "noauto" option for root device
	[  +4.696980] kauditd_printk_skb: 184 callbacks suppressed
	[  +5.875291] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.270728] systemd-fstab-generator[3785]: Ignoring "noauto" option for root device
	[Sep23 13:32] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [24ae473221e9e088de9eef2ef703d5d3f0766f4014f7fbc1d037c679d3e2baac] <==
	{"level":"info","ts":"2024-09-23T13:31:27.997644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 switched to configuration voters=(16379515494576287720)"}
	{"level":"info","ts":"2024-09-23T13:31:27.997745Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","added-peer-id":"e34fba8f5739efe8","added-peer-peer-urls":["https://192.168.39.168:2380"]}
	{"level":"info","ts":"2024-09-23T13:31:27.997888Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:31:27.997931Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:31:28.004251Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T13:31:28.004650Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e34fba8f5739efe8","initial-advertise-peer-urls":["https://192.168.39.168:2380"],"listen-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.168:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T13:31:28.004696Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T13:31:28.004807Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-23T13:31:28.004828Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-23T13:31:29.034194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-23T13:31:29.034324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T13:31:29.034393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgPreVoteResp from e34fba8f5739efe8 at term 2"}
	{"level":"info","ts":"2024-09-23T13:31:29.034430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T13:31:29.034455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgVoteResp from e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-09-23T13:31:29.034481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T13:31:29.034510Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e34fba8f5739efe8 elected leader e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-09-23T13:31:29.041301Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e34fba8f5739efe8","local-member-attributes":"{Name:multinode-851928 ClientURLs:[https://192.168.39.168:2379]}","request-path":"/0/members/e34fba8f5739efe8/attributes","cluster-id":"f729467791c9db0d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T13:31:29.041416Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:31:29.041441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:31:29.042167Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T13:31:29.042243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T13:31:29.042941Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:31:29.043193Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:31:29.043764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.168:2379"}
	{"level":"info","ts":"2024-09-23T13:31:29.043932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16] <==
	{"level":"info","ts":"2024-09-23T13:24:44.573381Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:24:44.575474Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.168:2379"}
	{"level":"info","ts":"2024-09-23T13:24:44.573417Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:24:44.583668Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:24:44.583718Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:25:35.276328Z","caller":"traceutil/trace.go:171","msg":"trace[486577434] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"233.284853ms","start":"2024-09-23T13:25:35.043027Z","end":"2024-09-23T13:25:35.276312Z","steps":["trace[486577434] 'process raft request'  (duration: 213.187421ms)","trace[486577434] 'compare'  (duration: 19.944682ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:25:35.277038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.241161ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-851928-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:25:35.277095Z","caller":"traceutil/trace.go:171","msg":"trace[1220359028] range","detail":"{range_begin:/registry/minions/multinode-851928-m02; range_end:; response_count:0; response_revision:472; }","duration":"161.370509ms","start":"2024-09-23T13:25:35.115715Z","end":"2024-09-23T13:25:35.277086Z","steps":["trace[1220359028] 'agreement among raft nodes before linearized reading'  (duration: 161.148351ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:35.277290Z","caller":"traceutil/trace.go:171","msg":"trace[911968721] linearizableReadLoop","detail":"{readStateIndex:492; appliedIndex:491; }","duration":"160.536747ms","start":"2024-09-23T13:25:35.115721Z","end":"2024-09-23T13:25:35.276258Z","steps":["trace[911968721] 'read index received'  (duration: 140.456701ms)","trace[911968721] 'applied index is now lower than readState.Index'  (duration: 20.079109ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:26:34.717908Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.480109ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17287227831743210186 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-851928-m03.17f7e279b440da84\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-851928-m03.17f7e279b440da84\" value_size:642 lease:8063855794888434060 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-23T13:26:34.718049Z","caller":"traceutil/trace.go:171","msg":"trace[1960587685] linearizableReadLoop","detail":"{readStateIndex:644; appliedIndex:643; }","duration":"196.913721ms","start":"2024-09-23T13:26:34.521105Z","end":"2024-09-23T13:26:34.718019Z","steps":["trace[1960587685] 'read index received'  (duration: 84.531511ms)","trace[1960587685] 'applied index is now lower than readState.Index'  (duration: 112.381352ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T13:26:34.718116Z","caller":"traceutil/trace.go:171","msg":"trace[1188400364] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"237.415432ms","start":"2024-09-23T13:26:34.480685Z","end":"2024-09-23T13:26:34.718101Z","steps":["trace[1188400364] 'process raft request'  (duration: 124.969317ms)","trace[1188400364] 'compare'  (duration: 111.157939ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:26:34.718470Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.372899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-851928-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:26:34.718518Z","caller":"traceutil/trace.go:171","msg":"trace[1742467683] range","detail":"{range_begin:/registry/minions/multinode-851928-m03; range_end:; response_count:0; response_revision:610; }","duration":"197.422802ms","start":"2024-09-23T13:26:34.521083Z","end":"2024-09-23T13:26:34.718505Z","steps":["trace[1742467683] 'agreement among raft nodes before linearized reading'  (duration: 197.35752ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:26:36.497309Z","caller":"traceutil/trace.go:171","msg":"trace[1319836794] transaction","detail":"{read_only:false; response_revision:636; number_of_response:1; }","duration":"163.638701ms","start":"2024-09-23T13:26:36.333657Z","end":"2024-09-23T13:26:36.497296Z","steps":["trace[1319836794] 'process raft request'  (duration: 163.529076ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:29:51.493822Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-23T13:29:51.493932Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-851928","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	{"level":"warn","ts":"2024-09-23T13:29:51.494035Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T13:29:51.494140Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T13:29:51.554534Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T13:29:51.554658Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T13:29:51.554825Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e34fba8f5739efe8","current-leader-member-id":"e34fba8f5739efe8"}
	{"level":"info","ts":"2024-09-23T13:29:51.557947Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-23T13:29:51.558122Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-23T13:29:51.558170Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-851928","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	
	
	==> kernel <==
	 13:33:11 up 8 min,  0 users,  load average: 0.56, 0.45, 0.21
	Linux multinode-851928 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f] <==
	I0923 13:29:05.154837       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	I0923 13:29:15.155424       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:29:15.155469       1 main.go:299] handling current node
	I0923 13:29:15.155517       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:29:15.155523       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:29:15.155720       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:29:15.155746       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	I0923 13:29:25.150504       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:29:25.150702       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	I0923 13:29:25.150921       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:29:25.150957       1 main.go:299] handling current node
	I0923 13:29:25.150985       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:29:25.151006       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:29:35.154266       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:29:35.154325       1 main.go:299] handling current node
	I0923 13:29:35.154349       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:29:35.154355       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:29:35.154492       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:29:35.154509       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	I0923 13:29:45.154730       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:29:45.154903       1 main.go:299] handling current node
	I0923 13:29:45.154970       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:29:45.154992       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:29:45.155156       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:29:45.155180       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d99756f6019b4d7b958225ab6f9327a6b8c9203fbf3a2d830b5062cdd86647ba] <==
	I0923 13:32:22.464909       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	I0923 13:32:32.464358       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:32:32.464479       1 main.go:299] handling current node
	I0923 13:32:32.464499       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:32:32.464506       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:32:32.464656       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:32:32.464679       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	I0923 13:32:42.465833       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:32:42.465937       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:32:42.466110       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:32:42.466243       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	I0923 13:32:42.466402       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:32:42.466447       1 main.go:299] handling current node
	I0923 13:32:52.467183       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:32:52.467307       1 main.go:299] handling current node
	I0923 13:32:52.467337       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:32:52.467358       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:32:52.467534       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:32:52.467558       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.2.0/24] 
	I0923 13:33:02.467455       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:33:02.467571       1 main.go:299] handling current node
	I0923 13:33:02.467658       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:33:02.467679       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:33:02.467871       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:33:02.467922       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c] <==
	I0923 13:29:51.503913       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0923 13:29:51.504002       1 controller.go:157] Shutting down quota evaluator
	I0923 13:29:51.504014       1 controller.go:176] quota evaluator worker shutdown
	I0923 13:29:51.504521       1 controller.go:176] quota evaluator worker shutdown
	I0923 13:29:51.504531       1 controller.go:176] quota evaluator worker shutdown
	I0923 13:29:51.504535       1 controller.go:176] quota evaluator worker shutdown
	I0923 13:29:51.504540       1 controller.go:176] quota evaluator worker shutdown
	I0923 13:29:51.505805       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 13:29:51.507453       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0923 13:29:51.507566       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0923 13:29:51.507660       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 13:29:51.507859       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0923 13:29:51.507898       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0923 13:29:51.509543       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0923 13:29:51.509727       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0923 13:29:51.509799       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0923 13:29:51.511672       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0923 13:29:51.520694       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520764       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520800       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520834       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520865       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520912       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520943       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520991       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9e4fac777cc7ccf86d00ee7e26a7e351940e38e66d7ed676e56f1c859842e6bf] <==
	I0923 13:31:30.433980       1 aggregator.go:171] initial CRD sync complete...
	I0923 13:31:30.434000       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 13:31:30.434007       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 13:31:30.435863       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 13:31:30.470842       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 13:31:30.476161       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:31:30.476207       1 policy_source.go:224] refreshing policies
	I0923 13:31:30.480670       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 13:31:30.480757       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 13:31:30.480893       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 13:31:30.482649       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 13:31:30.482720       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 13:31:30.482859       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 13:31:30.486685       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0923 13:31:30.494374       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0923 13:31:30.536289       1 cache.go:39] Caches are synced for autoregister controller
	I0923 13:31:30.536942       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 13:31:31.298719       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 13:31:32.759918       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 13:31:32.892756       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 13:31:32.913910       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 13:31:33.036399       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 13:31:33.051938       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 13:31:33.933328       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 13:31:34.136237       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [53e3cea42ba31eed103d200b57d82aaa7e8e100a2266c766104a3a52b620a95f] <==
	I0923 13:32:28.698774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	I0923 13:32:28.706438       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.375µs"
	I0923 13:32:28.721721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="89.058µs"
	I0923 13:32:28.884305       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	I0923 13:32:32.499322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.832713ms"
	I0923 13:32:32.499932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.208µs"
	I0923 13:32:38.996421       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	I0923 13:32:47.681787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:47.723348       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:47.937510       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:47.937772       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:32:48.939044       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:32:48.939427       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851928-m03\" does not exist"
	I0923 13:32:48.948377       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851928-m03" podCIDRs=["10.244.2.0/24"]
	I0923 13:32:48.948420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:48.948531       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:48.965115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:48.979644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:49.332257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:53.980077       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:59.082126       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:33:07.775515       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:33:07.775675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:33:07.792189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:33:08.903197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	
	
	==> kube-controller-manager [eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4] <==
	I0923 13:27:24.413888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:24.660152       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:27:24.660955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:25.901848       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851928-m03\" does not exist"
	I0923 13:27:25.902178       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:27:25.925450       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851928-m03" podCIDRs=["10.244.3.0/24"]
	I0923 13:27:25.925721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:25.926859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:26.215847       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:26.560219       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:27.642182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:36.038126       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:45.983357       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:45.983675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:27:45.995332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:47.590984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:28:27.609939       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:28:27.610801       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:28:27.634908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:28:32.655476       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	I0923 13:28:32.670522       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:28:32.676955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	I0923 13:28:32.710374       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.41328ms"
	I0923 13:28:32.710624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="103.574µs"
	I0923 13:28:42.755074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	
	
	==> kube-proxy [1db6c0468a85317df9394dd318f8bc43400bc0ce88d7077411f6b535fd107e48] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:31:31.818141       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 13:31:31.828663       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0923 13:31:31.828878       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:31:31.886990       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:31:31.887130       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:31:31.887209       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:31:31.891181       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:31:31.891762       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:31:31.891961       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:31:31.893536       1 config.go:199] "Starting service config controller"
	I0923 13:31:31.893636       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:31:31.893668       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:31:31.893684       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:31:31.894217       1 config.go:328] "Starting node config controller"
	I0923 13:31:31.894239       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:31:31.994208       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:31:31.994249       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:31:31.994500       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:24:54.178352       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 13:24:54.212118       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0923 13:24:54.212218       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:24:54.266356       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:24:54.266397       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:24:54.266420       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:24:54.269106       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:24:54.269408       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:24:54.269430       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:24:54.270804       1 config.go:199] "Starting service config controller"
	I0923 13:24:54.270839       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:24:54.270869       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:24:54.270885       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:24:54.271353       1 config.go:328] "Starting node config controller"
	I0923 13:24:54.271380       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:24:54.371619       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:24:54.371706       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:24:54.371532       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c] <==
	E0923 13:24:45.873759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:45.873851       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 13:24:45.873874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:45.873966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 13:24:45.874029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:45.874177       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:24:45.874240       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 13:24:46.703431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:24:46.703487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:46.717006       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 13:24:46.717061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:46.726183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 13:24:46.726281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:46.867734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 13:24:46.867783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:47.007746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:24:47.007800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:47.044669       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:24:47.044714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:47.150710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 13:24:47.150857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:47.466480       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:24:47.467022       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 13:24:49.561328       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 13:29:51.506138       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f9ac0945c09a0432f1c1e1b73c691250779a94d3a34f10a893924ea884b2b3d0] <==
	W0923 13:31:30.425306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:31:30.428217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.425359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 13:31:30.428271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.425404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 13:31:30.428324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.425488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:31:30.428376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.425539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 13:31:30.428428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:31:30.428485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427784       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 13:31:30.428539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 13:31:30.428649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:31:30.428710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 13:31:30.428760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427990       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 13:31:30.428812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.428841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 13:31:30.428875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 13:31:31.599289       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 13:31:36 multinode-851928 kubelet[2938]: E0923 13:31:36.848824    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098296848374866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:31:37 multinode-851928 kubelet[2938]: I0923 13:31:37.050768    2938 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 23 13:31:46 multinode-851928 kubelet[2938]: E0923 13:31:46.850085    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098306849803763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:31:46 multinode-851928 kubelet[2938]: E0923 13:31:46.850475    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098306849803763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:31:56 multinode-851928 kubelet[2938]: E0923 13:31:56.852134    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098316851552462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:31:56 multinode-851928 kubelet[2938]: E0923 13:31:56.852451    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098316851552462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:06 multinode-851928 kubelet[2938]: E0923 13:32:06.859405    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098326856114942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:06 multinode-851928 kubelet[2938]: E0923 13:32:06.859747    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098326856114942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:16 multinode-851928 kubelet[2938]: E0923 13:32:16.861858    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098336861497925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:16 multinode-851928 kubelet[2938]: E0923 13:32:16.862205    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098336861497925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:26 multinode-851928 kubelet[2938]: E0923 13:32:26.798649    2938 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:32:26 multinode-851928 kubelet[2938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:32:26 multinode-851928 kubelet[2938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:32:26 multinode-851928 kubelet[2938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:32:26 multinode-851928 kubelet[2938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:32:26 multinode-851928 kubelet[2938]: E0923 13:32:26.864138    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098346863786716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:26 multinode-851928 kubelet[2938]: E0923 13:32:26.864183    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098346863786716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:36 multinode-851928 kubelet[2938]: E0923 13:32:36.865387    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098356865095939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:36 multinode-851928 kubelet[2938]: E0923 13:32:36.865821    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098356865095939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:46 multinode-851928 kubelet[2938]: E0923 13:32:46.869659    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098366868695200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:46 multinode-851928 kubelet[2938]: E0923 13:32:46.869794    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098366868695200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:56 multinode-851928 kubelet[2938]: E0923 13:32:56.874474    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098376874157568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:32:56 multinode-851928 kubelet[2938]: E0923 13:32:56.874550    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098376874157568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:33:06 multinode-851928 kubelet[2938]: E0923 13:33:06.878371    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098386878056016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:33:06 multinode-851928 kubelet[2938]: E0923 13:33:06.878740    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098386878056016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 13:33:10.268479  701449 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19690-662205/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
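The `bufio.Scanner: token too long` error above is a Go standard-library limit rather than a missing file: bufio.Scanner refuses any single line longer than its buffer (64 KiB by default), and lastStart.txt evidently contains one. A minimal sketch, assuming nothing about minikube's own logs.go beyond the file path quoted in the error, of reading such a file with an enlarged scanner buffer:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path copied from the error message above; purely illustrative.
		f, err := os.Open("/home/jenkins/minikube-integration/19690-662205/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); any longer line
		// makes Scan fail with "bufio.Scanner: token too long". Raise it to 10 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}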
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-851928 -n multinode-851928
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-851928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.37s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (144.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 stop
E0923 13:33:32.250032  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-851928 stop: exit status 82 (2m0.488777989s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-851928-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-851928 stop": exit status 82
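Exit status 82 is the code that accompanies the GUEST_STOP_TIMEOUT message in the box above. A hedged sketch, not the actual multinode_test.go logic, of how a caller might detect that exit code and collect the logs the message asks for (binary path and profile name taken from the command above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		stop := exec.Command("out/minikube-linux-amd64", "-p", "multinode-851928", "stop")
		out, err := stop.CombinedOutput()
		fmt.Print(string(out))

		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 82 {
			// Stop timed out; gather logs as the error box suggests.
			logs := exec.Command("out/minikube-linux-amd64", "-p", "multinode-851928",
				"logs", "--file=logs.txt")
			if lerr := logs.Run(); lerr != nil {
				fmt.Println("failed to collect logs:", lerr)
			}
		}
	}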
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 status
E0923 13:35:29.179269  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-851928 status: (18.755403384s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 status --alsologtostderr
E0923 13:35:36.850675  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-851928 status --alsologtostderr: (3.360128127s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-851928 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-851928 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-851928 -n multinode-851928
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-851928 logs -n 25: (1.424421878s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m02:/home/docker/cp-test.txt                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928:/home/docker/cp-test_multinode-851928-m02_multinode-851928.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n multinode-851928 sudo cat                                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /home/docker/cp-test_multinode-851928-m02_multinode-851928.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m02:/home/docker/cp-test.txt                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03:/home/docker/cp-test_multinode-851928-m02_multinode-851928-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n multinode-851928-m03 sudo cat                                   | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /home/docker/cp-test_multinode-851928-m02_multinode-851928-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp testdata/cp-test.txt                                                | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m03:/home/docker/cp-test.txt                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1094698981/001/cp-test_multinode-851928-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m03:/home/docker/cp-test.txt                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928:/home/docker/cp-test_multinode-851928-m03_multinode-851928.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n multinode-851928 sudo cat                                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /home/docker/cp-test_multinode-851928-m03_multinode-851928.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m03:/home/docker/cp-test.txt                       | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m02:/home/docker/cp-test_multinode-851928-m03_multinode-851928-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n multinode-851928-m02 sudo cat                                   | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /home/docker/cp-test_multinode-851928-m03_multinode-851928-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-851928 node stop m03                                                          | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	| node    | multinode-851928 node start                                                             | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-851928                                                                | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC |                     |
	| stop    | -p multinode-851928                                                                     | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC |                     |
	| start   | -p multinode-851928                                                                     | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:29 UTC | 23 Sep 24 13:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-851928                                                                | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:33 UTC |                     |
	| node    | multinode-851928 node delete                                                            | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:33 UTC | 23 Sep 24 13:33 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-851928 stop                                                                   | multinode-851928 | jenkins | v1.34.0 | 23 Sep 24 13:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
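	A minimal sketch of the copy-and-verify pattern exercised above, using the profile and node names from the table (same cp and ssh -n subcommands recorded in the Command column; paths as listed):
	
	  # host file -> node, then read it back on that node
	  minikube -p multinode-851928 cp testdata/cp-test.txt multinode-851928-m03:/home/docker/cp-test.txt
	  minikube -p multinode-851928 ssh -n multinode-851928-m03 "sudo cat /home/docker/cp-test.txt"
	  # node -> node copy, as also exercised above
	  minikube -p multinode-851928 cp multinode-851928-m03:/home/docker/cp-test.txt multinode-851928-m02:/home/docker/cp-test_multinode-851928-m03_multinode-851928-m02.txt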
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:29:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:29:50.670991  700346 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:29:50.671159  700346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:29:50.671170  700346 out.go:358] Setting ErrFile to fd 2...
	I0923 13:29:50.671174  700346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:29:50.671356  700346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:29:50.672020  700346 out.go:352] Setting JSON to false
	I0923 13:29:50.673098  700346 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":11534,"bootTime":1727086657,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 13:29:50.673180  700346 start.go:139] virtualization: kvm guest
	I0923 13:29:50.675424  700346 out.go:177] * [multinode-851928] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 13:29:50.676722  700346 notify.go:220] Checking for updates...
	I0923 13:29:50.676747  700346 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:29:50.678331  700346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:29:50.679738  700346 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 13:29:50.681319  700346 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 13:29:50.682751  700346 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 13:29:50.684091  700346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:29:50.685904  700346 config.go:182] Loaded profile config "multinode-851928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:29:50.686026  700346 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:29:50.686516  700346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:29:50.686566  700346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:29:50.702387  700346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0923 13:29:50.702982  700346 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:29:50.703653  700346 main.go:141] libmachine: Using API Version  1
	I0923 13:29:50.703675  700346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:29:50.704055  700346 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:29:50.704247  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:29:50.743263  700346 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 13:29:50.744641  700346 start.go:297] selected driver: kvm2
	I0923 13:29:50.744665  700346 start.go:901] validating driver "kvm2" against &{Name:multinode-851928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-851928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.25 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:29:50.744836  700346 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:29:50.745192  700346 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:29:50.745281  700346 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 13:29:50.761702  700346 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 13:29:50.762472  700346 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:29:50.762526  700346 cni.go:84] Creating CNI manager for ""
	I0923 13:29:50.762589  700346 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:29:50.762669  700346 start.go:340] cluster config:
	{Name:multinode-851928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-851928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.25 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:29:50.762819  700346 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:29:50.764902  700346 out.go:177] * Starting "multinode-851928" primary control-plane node in "multinode-851928" cluster
	I0923 13:29:50.766222  700346 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:29:50.766297  700346 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 13:29:50.766309  700346 cache.go:56] Caching tarball of preloaded images
	I0923 13:29:50.766418  700346 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 13:29:50.766429  700346 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 13:29:50.766585  700346 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/config.json ...
	I0923 13:29:50.766840  700346 start.go:360] acquireMachinesLock for multinode-851928: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:29:50.766894  700346 start.go:364] duration metric: took 31.116µs to acquireMachinesLock for "multinode-851928"
	I0923 13:29:50.766909  700346 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:29:50.766915  700346 fix.go:54] fixHost starting: 
	I0923 13:29:50.767175  700346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:29:50.767209  700346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:29:50.782907  700346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45583
	I0923 13:29:50.783448  700346 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:29:50.783998  700346 main.go:141] libmachine: Using API Version  1
	I0923 13:29:50.784019  700346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:29:50.784343  700346 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:29:50.784533  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:29:50.784690  700346 main.go:141] libmachine: (multinode-851928) Calling .GetState
	I0923 13:29:50.786380  700346 fix.go:112] recreateIfNeeded on multinode-851928: state=Running err=<nil>
	W0923 13:29:50.786407  700346 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:29:50.788544  700346 out.go:177] * Updating the running kvm2 "multinode-851928" VM ...
	I0923 13:29:50.789950  700346 machine.go:93] provisionDockerMachine start ...
	I0923 13:29:50.789981  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:29:50.790263  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:50.792868  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:50.793337  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:50.793361  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:50.793583  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:29:50.793804  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:50.793957  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:50.794130  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:29:50.794384  700346 main.go:141] libmachine: Using SSH client type: native
	I0923 13:29:50.794596  700346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0923 13:29:50.794607  700346 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:29:50.899330  700346 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-851928
	
	I0923 13:29:50.899369  700346 main.go:141] libmachine: (multinode-851928) Calling .GetMachineName
	I0923 13:29:50.899721  700346 buildroot.go:166] provisioning hostname "multinode-851928"
	I0923 13:29:50.899760  700346 main.go:141] libmachine: (multinode-851928) Calling .GetMachineName
	I0923 13:29:50.899988  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:50.902733  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:50.903132  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:50.903175  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:50.903379  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:29:50.903606  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:50.903781  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:50.903879  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:29:50.904048  700346 main.go:141] libmachine: Using SSH client type: native
	I0923 13:29:50.904293  700346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0923 13:29:50.904313  700346 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-851928 && echo "multinode-851928" | sudo tee /etc/hostname
	I0923 13:29:51.024098  700346 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-851928
	
	I0923 13:29:51.024127  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:51.027053  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.027450  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:51.027498  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.027682  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:29:51.027891  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:51.028076  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:51.028341  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:29:51.028596  700346 main.go:141] libmachine: Using SSH client type: native
	I0923 13:29:51.028843  700346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0923 13:29:51.028862  700346 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-851928' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-851928/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-851928' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:29:51.131021  700346 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:29:51.131054  700346 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 13:29:51.131079  700346 buildroot.go:174] setting up certificates
	I0923 13:29:51.131093  700346 provision.go:84] configureAuth start
	I0923 13:29:51.131108  700346 main.go:141] libmachine: (multinode-851928) Calling .GetMachineName
	I0923 13:29:51.131471  700346 main.go:141] libmachine: (multinode-851928) Calling .GetIP
	I0923 13:29:51.134297  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.134688  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:51.134715  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.134821  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:51.137369  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.137811  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:51.137864  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.138046  700346 provision.go:143] copyHostCerts
	I0923 13:29:51.138098  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 13:29:51.138169  700346 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 13:29:51.138192  700346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 13:29:51.138311  700346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 13:29:51.138453  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 13:29:51.138491  700346 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 13:29:51.138503  700346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 13:29:51.138550  700346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 13:29:51.138641  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 13:29:51.138669  700346 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 13:29:51.138680  700346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 13:29:51.138721  700346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 13:29:51.138817  700346 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.multinode-851928 san=[127.0.0.1 192.168.39.168 localhost minikube multinode-851928]
	I0923 13:29:51.206163  700346 provision.go:177] copyRemoteCerts
	I0923 13:29:51.206254  700346 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:29:51.206281  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:51.208845  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.209244  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:51.209278  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.209579  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:29:51.209825  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:51.210069  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:29:51.210251  700346 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/multinode-851928/id_rsa Username:docker}
	I0923 13:29:51.292053  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 13:29:51.292129  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 13:29:51.318552  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 13:29:51.318646  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0923 13:29:51.344325  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 13:29:51.344427  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:29:51.371790  700346 provision.go:87] duration metric: took 240.68148ms to configureAuth
	I0923 13:29:51.371826  700346 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:29:51.372107  700346 config.go:182] Loaded profile config "multinode-851928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:29:51.372210  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:29:51.375425  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.375938  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:29:51.375975  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:29:51.376145  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:29:51.376416  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:51.376607  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:29:51.376782  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:29:51.377023  700346 main.go:141] libmachine: Using SSH client type: native
	I0923 13:29:51.377229  700346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0923 13:29:51.377244  700346 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:31:22.203082  700346 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 13:31:22.203121  700346 machine.go:96] duration metric: took 1m31.413151929s to provisionDockerMachine
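	The SSH command issued at 13:29:51.377 (write the CRIO_MINIKUBE_OPTIONS drop-in, then restart CRI-O) did not return until 13:31:22.203, so that restart accounts for almost all of the 1m31s provisionDockerMachine duration above. A minimal sketch for inspecting or re-applying the same drop-in by hand on the guest (content copied from the command above):
	
	  # show the insecure-registry drop-in written for the service CIDR
	  cat /etc/sysconfig/crio.minikube
	  # re-apply it and restart CRI-O (same content as the SSH command above)
	  sudo mkdir -p /etc/sysconfig
	  printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	  sudo systemctl restart crio && systemctl is-active crio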
	I0923 13:31:22.203135  700346 start.go:293] postStartSetup for "multinode-851928" (driver="kvm2")
	I0923 13:31:22.203146  700346 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:31:22.203167  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:31:22.203560  700346 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:31:22.203601  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:31:22.207072  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.207534  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:22.207560  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.207768  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:31:22.208013  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:31:22.208168  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:31:22.208297  700346 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/multinode-851928/id_rsa Username:docker}
	I0923 13:31:22.289623  700346 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:31:22.293999  700346 command_runner.go:130] > NAME=Buildroot
	I0923 13:31:22.294030  700346 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 13:31:22.294037  700346 command_runner.go:130] > ID=buildroot
	I0923 13:31:22.294063  700346 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 13:31:22.294072  700346 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 13:31:22.294115  700346 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:31:22.294130  700346 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 13:31:22.294196  700346 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 13:31:22.294299  700346 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 13:31:22.294316  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /etc/ssl/certs/6694472.pem
	I0923 13:31:22.294403  700346 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:31:22.304117  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:31:22.327851  700346 start.go:296] duration metric: took 124.697164ms for postStartSetup
	I0923 13:31:22.327905  700346 fix.go:56] duration metric: took 1m31.560989633s for fixHost
	I0923 13:31:22.327937  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:31:22.331086  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.331506  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:22.331553  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.331646  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:31:22.331862  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:31:22.332012  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:31:22.332258  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:31:22.332481  700346 main.go:141] libmachine: Using SSH client type: native
	I0923 13:31:22.332670  700346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0923 13:31:22.332681  700346 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:31:22.434552  700346 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727098282.409817624
	
	I0923 13:31:22.434582  700346 fix.go:216] guest clock: 1727098282.409817624
	I0923 13:31:22.434593  700346 fix.go:229] Guest: 2024-09-23 13:31:22.409817624 +0000 UTC Remote: 2024-09-23 13:31:22.32791117 +0000 UTC m=+91.695981062 (delta=81.906454ms)
	I0923 13:31:22.434638  700346 fix.go:200] guest clock delta is within tolerance: 81.906454ms
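	The check above compares the guest wall clock (date +%s.%N over SSH) against the host clock and only proceeds when the delta is within tolerance. A minimal sketch of the same comparison, assuming the guest IP and SSH key path shown earlier in this log:
	
	  KEY=/home/jenkins/minikube-integration/19690-662205/.minikube/machines/multinode-851928/id_rsa
	  host_ts=$(date +%s.%N)
	  guest_ts=$(ssh -i "$KEY" -o StrictHostKeyChecking=no docker@192.168.39.168 'date +%s.%N')
	  echo "guest - host delta: $(echo "$guest_ts - $host_ts" | bc)s"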
	I0923 13:31:22.434646  700346 start.go:83] releasing machines lock for "multinode-851928", held for 1m31.667740609s
	I0923 13:31:22.434674  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:31:22.434963  700346 main.go:141] libmachine: (multinode-851928) Calling .GetIP
	I0923 13:31:22.437863  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.438337  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:22.438370  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.438549  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:31:22.439075  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:31:22.439265  700346 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:31:22.439367  700346 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:31:22.439418  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:31:22.439554  700346 ssh_runner.go:195] Run: cat /version.json
	I0923 13:31:22.439578  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:31:22.442584  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.442608  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.442989  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:22.443020  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.443049  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:22.443066  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:22.443188  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:31:22.443290  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:31:22.443382  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:31:22.443459  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:31:22.443531  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:31:22.443589  700346 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:31:22.443682  700346 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/multinode-851928/id_rsa Username:docker}
	I0923 13:31:22.444020  700346 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/multinode-851928/id_rsa Username:docker}
	I0923 13:31:22.553213  700346 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0923 13:31:22.553953  700346 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0923 13:31:22.554166  700346 ssh_runner.go:195] Run: systemctl --version
	I0923 13:31:22.560194  700346 command_runner.go:130] > systemd 252 (252)
	I0923 13:31:22.560252  700346 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0923 13:31:22.560444  700346 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:31:22.724305  700346 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:31:22.730132  700346 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 13:31:22.730219  700346 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:31:22.730287  700346 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:31:22.740108  700346 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 13:31:22.740149  700346 start.go:495] detecting cgroup driver to use...
	I0923 13:31:22.740221  700346 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:31:22.757562  700346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:31:22.772194  700346 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:31:22.772259  700346 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:31:22.786916  700346 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:31:22.801506  700346 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:31:22.950545  700346 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:31:23.087585  700346 docker.go:233] disabling docker service ...
	I0923 13:31:23.087667  700346 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:31:23.104900  700346 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:31:23.118910  700346 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:31:23.259349  700346 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:31:23.402378  700346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
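	Because the profile pins ContainerRuntime=crio, the restart first stops and masks the competing runtimes before reconfiguring CRI-O. Consolidated for readability, the commands run over SSH above are:
	
	  sudo systemctl stop -f containerd
	  sudo systemctl stop -f cri-docker.socket
	  sudo systemctl stop -f cri-docker.service
	  sudo systemctl disable cri-docker.socket
	  sudo systemctl mask cri-docker.service
	  sudo systemctl stop -f docker.socket
	  sudo systemctl stop -f docker.service
	  sudo systemctl disable docker.socket
	  sudo systemctl mask docker.service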
	I0923 13:31:23.416478  700346 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:31:23.436070  700346 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0923 13:31:23.436118  700346 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 13:31:23.436181  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.447083  700346 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:31:23.447167  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.457730  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.468411  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.478917  700346 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:31:23.489541  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.500253  700346 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.511127  700346 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:31:23.521795  700346 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:31:23.531323  700346 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0923 13:31:23.531470  700346 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:31:23.540851  700346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:31:23.689553  700346 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 13:31:23.996997  700346 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:31:23.997071  700346 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:31:24.001998  700346 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0923 13:31:24.002037  700346 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 13:31:24.002052  700346 command_runner.go:130] > Device: 0,22	Inode: 1323        Links: 1
	I0923 13:31:24.002061  700346 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 13:31:24.002067  700346 command_runner.go:130] > Access: 2024-09-23 13:31:23.943333650 +0000
	I0923 13:31:24.002079  700346 command_runner.go:130] > Modify: 2024-09-23 13:31:23.846331146 +0000
	I0923 13:31:24.002086  700346 command_runner.go:130] > Change: 2024-09-23 13:31:23.846331146 +0000
	I0923 13:31:24.002092  700346 command_runner.go:130] >  Birth: -
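	The tee/sed sequence above points crictl at the CRI-O socket and aligns CRI-O with the cluster settings (pause image, cgroupfs cgroup manager, conmon cgroup, IP forwarding); the default_sysctls edit for net.ipv4.ip_unprivileged_port_start=0 is applied the same way. A consolidated sketch, using the paths shown in the log:
	
	  printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	  sudo systemctl daemon-reload && sudo systemctl restart crio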
	I0923 13:31:24.002146  700346 start.go:563] Will wait 60s for crictl version
	I0923 13:31:24.002204  700346 ssh_runner.go:195] Run: which crictl
	I0923 13:31:24.005812  700346 command_runner.go:130] > /usr/bin/crictl
	I0923 13:31:24.005921  700346 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:31:24.043295  700346 command_runner.go:130] > Version:  0.1.0
	I0923 13:31:24.043330  700346 command_runner.go:130] > RuntimeName:  cri-o
	I0923 13:31:24.043335  700346 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0923 13:31:24.043341  700346 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 13:31:24.046822  700346 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 13:31:24.046905  700346 ssh_runner.go:195] Run: crio --version
	I0923 13:31:24.075309  700346 command_runner.go:130] > crio version 1.29.1
	I0923 13:31:24.075335  700346 command_runner.go:130] > Version:        1.29.1
	I0923 13:31:24.075340  700346 command_runner.go:130] > GitCommit:      unknown
	I0923 13:31:24.075344  700346 command_runner.go:130] > GitCommitDate:  unknown
	I0923 13:31:24.075348  700346 command_runner.go:130] > GitTreeState:   clean
	I0923 13:31:24.075354  700346 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0923 13:31:24.075358  700346 command_runner.go:130] > GoVersion:      go1.21.6
	I0923 13:31:24.075362  700346 command_runner.go:130] > Compiler:       gc
	I0923 13:31:24.075366  700346 command_runner.go:130] > Platform:       linux/amd64
	I0923 13:31:24.075370  700346 command_runner.go:130] > Linkmode:       dynamic
	I0923 13:31:24.075395  700346 command_runner.go:130] > BuildTags:      
	I0923 13:31:24.075400  700346 command_runner.go:130] >   containers_image_ostree_stub
	I0923 13:31:24.075404  700346 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0923 13:31:24.075407  700346 command_runner.go:130] >   btrfs_noversion
	I0923 13:31:24.075412  700346 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0923 13:31:24.075415  700346 command_runner.go:130] >   libdm_no_deferred_remove
	I0923 13:31:24.075419  700346 command_runner.go:130] >   seccomp
	I0923 13:31:24.075423  700346 command_runner.go:130] > LDFlags:          unknown
	I0923 13:31:24.075430  700346 command_runner.go:130] > SeccompEnabled:   true
	I0923 13:31:24.075434  700346 command_runner.go:130] > AppArmorEnabled:  false
	I0923 13:31:24.076544  700346 ssh_runner.go:195] Run: crio --version
	I0923 13:31:24.104674  700346 command_runner.go:130] > crio version 1.29.1
	I0923 13:31:24.104701  700346 command_runner.go:130] > Version:        1.29.1
	I0923 13:31:24.104707  700346 command_runner.go:130] > GitCommit:      unknown
	I0923 13:31:24.104711  700346 command_runner.go:130] > GitCommitDate:  unknown
	I0923 13:31:24.104715  700346 command_runner.go:130] > GitTreeState:   clean
	I0923 13:31:24.104721  700346 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0923 13:31:24.104725  700346 command_runner.go:130] > GoVersion:      go1.21.6
	I0923 13:31:24.104729  700346 command_runner.go:130] > Compiler:       gc
	I0923 13:31:24.104733  700346 command_runner.go:130] > Platform:       linux/amd64
	I0923 13:31:24.104737  700346 command_runner.go:130] > Linkmode:       dynamic
	I0923 13:31:24.104741  700346 command_runner.go:130] > BuildTags:      
	I0923 13:31:24.104746  700346 command_runner.go:130] >   containers_image_ostree_stub
	I0923 13:31:24.104750  700346 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0923 13:31:24.104753  700346 command_runner.go:130] >   btrfs_noversion
	I0923 13:31:24.104757  700346 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0923 13:31:24.104761  700346 command_runner.go:130] >   libdm_no_deferred_remove
	I0923 13:31:24.104764  700346 command_runner.go:130] >   seccomp
	I0923 13:31:24.104768  700346 command_runner.go:130] > LDFlags:          unknown
	I0923 13:31:24.104772  700346 command_runner.go:130] > SeccompEnabled:   true
	I0923 13:31:24.104776  700346 command_runner.go:130] > AppArmorEnabled:  false
	I0923 13:31:24.107554  700346 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 13:31:24.108884  700346 main.go:141] libmachine: (multinode-851928) Calling .GetIP
	I0923 13:31:24.111857  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:24.112232  700346 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:31:24.112272  700346 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:31:24.112473  700346 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 13:31:24.116506  700346 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0923 13:31:24.116652  700346 kubeadm.go:883] updating cluster {Name:multinode-851928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-851928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.25 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
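
The cluster definition dumped above is a nested Go configuration struct. A reduced, illustrative model of a few of the fields visible in the log (field names and shapes are assumptions for illustration, not minikube's actual types):

package main

import "fmt"

// Node mirrors the per-node entries shown in the Nodes:[...] list above.
type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

// ClusterConfig mirrors a small subset of the top-level fields above.
type ClusterConfig struct {
	Name              string
	Driver            string
	Memory            int
	CPUs              int
	KubernetesVersion string
	ContainerRuntime  string
	Nodes             []Node
}

func main() {
	cfg := ClusterConfig{
		Name:              "multinode-851928",
		Driver:            "kvm2",
		Memory:            2200,
		CPUs:              2,
		KubernetesVersion: "v1.31.1",
		ContainerRuntime:  "crio",
		Nodes: []Node{
			{IP: "192.168.39.168", Port: 8443, KubernetesVersion: "v1.31.1", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
			{Name: "m02", IP: "192.168.39.25", Port: 8443, KubernetesVersion: "v1.31.1", ContainerRuntime: "crio", Worker: true},
			{Name: "m03", IP: "192.168.39.173", KubernetesVersion: "v1.31.1", ContainerRuntime: "crio", Worker: true},
		},
	}
	fmt.Printf("%+v\n", cfg)
}
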
	I0923 13:31:24.116869  700346 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:31:24.116944  700346 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:31:24.157804  700346 command_runner.go:130] > {
	I0923 13:31:24.157856  700346 command_runner.go:130] >   "images": [
	I0923 13:31:24.157863  700346 command_runner.go:130] >     {
	I0923 13:31:24.157875  700346 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0923 13:31:24.157881  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.157895  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0923 13:31:24.157900  700346 command_runner.go:130] >       ],
	I0923 13:31:24.157905  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.157917  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0923 13:31:24.157926  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0923 13:31:24.157931  700346 command_runner.go:130] >       ],
	I0923 13:31:24.157939  700346 command_runner.go:130] >       "size": "87190579",
	I0923 13:31:24.157943  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.157948  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.157958  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.157965  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.157969  700346 command_runner.go:130] >     },
	I0923 13:31:24.157972  700346 command_runner.go:130] >     {
	I0923 13:31:24.157980  700346 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0923 13:31:24.157985  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.157991  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0923 13:31:24.157994  700346 command_runner.go:130] >       ],
	I0923 13:31:24.157999  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158008  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0923 13:31:24.158018  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0923 13:31:24.158023  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158027  700346 command_runner.go:130] >       "size": "1363676",
	I0923 13:31:24.158031  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.158041  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158048  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158059  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158066  700346 command_runner.go:130] >     },
	I0923 13:31:24.158070  700346 command_runner.go:130] >     {
	I0923 13:31:24.158077  700346 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0923 13:31:24.158082  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158087  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0923 13:31:24.158092  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158096  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158103  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0923 13:31:24.158111  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0923 13:31:24.158115  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158122  700346 command_runner.go:130] >       "size": "31470524",
	I0923 13:31:24.158126  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.158131  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158136  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158141  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158147  700346 command_runner.go:130] >     },
	I0923 13:31:24.158151  700346 command_runner.go:130] >     {
	I0923 13:31:24.158160  700346 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0923 13:31:24.158164  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158171  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0923 13:31:24.158175  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158181  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158188  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0923 13:31:24.158205  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0923 13:31:24.158212  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158216  700346 command_runner.go:130] >       "size": "63273227",
	I0923 13:31:24.158228  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.158233  700346 command_runner.go:130] >       "username": "nonroot",
	I0923 13:31:24.158238  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158242  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158247  700346 command_runner.go:130] >     },
	I0923 13:31:24.158251  700346 command_runner.go:130] >     {
	I0923 13:31:24.158266  700346 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0923 13:31:24.158273  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158278  700346 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0923 13:31:24.158282  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158291  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158301  700346 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0923 13:31:24.158309  700346 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0923 13:31:24.158314  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158319  700346 command_runner.go:130] >       "size": "149009664",
	I0923 13:31:24.158325  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.158329  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.158333  700346 command_runner.go:130] >       },
	I0923 13:31:24.158337  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158341  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158345  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158349  700346 command_runner.go:130] >     },
	I0923 13:31:24.158356  700346 command_runner.go:130] >     {
	I0923 13:31:24.158365  700346 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0923 13:31:24.158369  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158375  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0923 13:31:24.158381  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158385  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158393  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0923 13:31:24.158403  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0923 13:31:24.158407  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158411  700346 command_runner.go:130] >       "size": "95237600",
	I0923 13:31:24.158415  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.158418  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.158422  700346 command_runner.go:130] >       },
	I0923 13:31:24.158427  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158431  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158437  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158441  700346 command_runner.go:130] >     },
	I0923 13:31:24.158453  700346 command_runner.go:130] >     {
	I0923 13:31:24.158463  700346 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0923 13:31:24.158470  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158479  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0923 13:31:24.158487  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158491  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158499  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0923 13:31:24.158509  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0923 13:31:24.158516  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158520  700346 command_runner.go:130] >       "size": "89437508",
	I0923 13:31:24.158527  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.158531  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.158535  700346 command_runner.go:130] >       },
	I0923 13:31:24.158539  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158543  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158549  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158553  700346 command_runner.go:130] >     },
	I0923 13:31:24.158557  700346 command_runner.go:130] >     {
	I0923 13:31:24.158563  700346 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0923 13:31:24.158572  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158577  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0923 13:31:24.158581  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158585  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158611  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0923 13:31:24.158626  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0923 13:31:24.158633  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158639  700346 command_runner.go:130] >       "size": "92733849",
	I0923 13:31:24.158649  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.158659  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158665  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158673  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158680  700346 command_runner.go:130] >     },
	I0923 13:31:24.158685  700346 command_runner.go:130] >     {
	I0923 13:31:24.158699  700346 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0923 13:31:24.158703  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158708  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0923 13:31:24.158712  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158720  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158732  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0923 13:31:24.158747  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0923 13:31:24.158758  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158766  700346 command_runner.go:130] >       "size": "68420934",
	I0923 13:31:24.158773  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.158783  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.158789  700346 command_runner.go:130] >       },
	I0923 13:31:24.158800  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158810  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158817  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.158826  700346 command_runner.go:130] >     },
	I0923 13:31:24.158832  700346 command_runner.go:130] >     {
	I0923 13:31:24.158845  700346 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0923 13:31:24.158854  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.158862  700346 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0923 13:31:24.158873  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158880  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.158895  700346 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0923 13:31:24.158916  700346 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0923 13:31:24.158926  700346 command_runner.go:130] >       ],
	I0923 13:31:24.158934  700346 command_runner.go:130] >       "size": "742080",
	I0923 13:31:24.158944  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.158952  700346 command_runner.go:130] >         "value": "65535"
	I0923 13:31:24.158962  700346 command_runner.go:130] >       },
	I0923 13:31:24.158970  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.158980  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.158987  700346 command_runner.go:130] >       "pinned": true
	I0923 13:31:24.158997  700346 command_runner.go:130] >     }
	I0923 13:31:24.159015  700346 command_runner.go:130] >   ]
	I0923 13:31:24.159026  700346 command_runner.go:130] > }
	I0923 13:31:24.159299  700346 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:31:24.159318  700346 crio.go:433] Images already preloaded, skipping extraction
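
The decision logged here ("all images are preloaded ... skipping extraction") follows from comparing the crictl image list against the images needed for the requested Kubernetes version. A minimal sketch of that comparison (not minikube's implementation; the required list is an illustrative subset of the images printed above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImage/crictlImages model only the fields of the JSON output above that we need.
type crictlImage struct {
	RepoTags []string `json:"repoTags"`
}

type crictlImages struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Same command as in the log: list the images known to the CRI runtime.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of the v1.31.1 images shown in the output above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/coredns/coredns:v1.11.3",
	}
	var missing []string
	for _, want := range required {
		if !have[want] {
			missing = append(missing, want)
		}
	}
	if len(missing) == 0 {
		fmt.Println("all required images are present; preload extraction can be skipped")
	} else {
		fmt.Println("missing images:", strings.Join(missing, ", "))
	}
}
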
	I0923 13:31:24.159374  700346 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:31:24.192044  700346 command_runner.go:130] > {
	I0923 13:31:24.192081  700346 command_runner.go:130] >   "images": [
	I0923 13:31:24.192089  700346 command_runner.go:130] >     {
	I0923 13:31:24.192102  700346 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0923 13:31:24.192109  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192120  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0923 13:31:24.192127  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192139  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192152  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0923 13:31:24.192174  700346 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0923 13:31:24.192185  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192194  700346 command_runner.go:130] >       "size": "87190579",
	I0923 13:31:24.192202  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.192207  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192234  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192245  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192250  700346 command_runner.go:130] >     },
	I0923 13:31:24.192254  700346 command_runner.go:130] >     {
	I0923 13:31:24.192259  700346 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0923 13:31:24.192266  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192276  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0923 13:31:24.192284  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192289  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192301  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0923 13:31:24.192308  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0923 13:31:24.192314  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192319  700346 command_runner.go:130] >       "size": "1363676",
	I0923 13:31:24.192326  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.192333  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192340  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192344  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192347  700346 command_runner.go:130] >     },
	I0923 13:31:24.192351  700346 command_runner.go:130] >     {
	I0923 13:31:24.192357  700346 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0923 13:31:24.192364  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192370  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0923 13:31:24.192374  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192378  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192387  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0923 13:31:24.192394  700346 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0923 13:31:24.192400  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192404  700346 command_runner.go:130] >       "size": "31470524",
	I0923 13:31:24.192410  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.192416  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192420  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192424  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192430  700346 command_runner.go:130] >     },
	I0923 13:31:24.192433  700346 command_runner.go:130] >     {
	I0923 13:31:24.192439  700346 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0923 13:31:24.192446  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192451  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0923 13:31:24.192457  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192461  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192475  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0923 13:31:24.192487  700346 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0923 13:31:24.192494  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192498  700346 command_runner.go:130] >       "size": "63273227",
	I0923 13:31:24.192504  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.192510  700346 command_runner.go:130] >       "username": "nonroot",
	I0923 13:31:24.192522  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192533  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192539  700346 command_runner.go:130] >     },
	I0923 13:31:24.192543  700346 command_runner.go:130] >     {
	I0923 13:31:24.192548  700346 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0923 13:31:24.192555  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192560  700346 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0923 13:31:24.192564  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192570  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192576  700346 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0923 13:31:24.192585  700346 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0923 13:31:24.192589  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192594  700346 command_runner.go:130] >       "size": "149009664",
	I0923 13:31:24.192597  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.192602  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.192605  700346 command_runner.go:130] >       },
	I0923 13:31:24.192611  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192617  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192621  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192625  700346 command_runner.go:130] >     },
	I0923 13:31:24.192631  700346 command_runner.go:130] >     {
	I0923 13:31:24.192638  700346 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0923 13:31:24.192644  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192649  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0923 13:31:24.192652  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192660  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192668  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0923 13:31:24.192677  700346 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0923 13:31:24.192681  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192685  700346 command_runner.go:130] >       "size": "95237600",
	I0923 13:31:24.192692  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.192696  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.192700  700346 command_runner.go:130] >       },
	I0923 13:31:24.192705  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192709  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192713  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192719  700346 command_runner.go:130] >     },
	I0923 13:31:24.192723  700346 command_runner.go:130] >     {
	I0923 13:31:24.192729  700346 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0923 13:31:24.192736  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192741  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0923 13:31:24.192747  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192751  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192761  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0923 13:31:24.192772  700346 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0923 13:31:24.192781  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192786  700346 command_runner.go:130] >       "size": "89437508",
	I0923 13:31:24.192793  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.192798  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.192808  700346 command_runner.go:130] >       },
	I0923 13:31:24.192812  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192816  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192821  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192825  700346 command_runner.go:130] >     },
	I0923 13:31:24.192828  700346 command_runner.go:130] >     {
	I0923 13:31:24.192834  700346 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0923 13:31:24.192841  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192846  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0923 13:31:24.192851  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192856  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.192870  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0923 13:31:24.192880  700346 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0923 13:31:24.192884  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192888  700346 command_runner.go:130] >       "size": "92733849",
	I0923 13:31:24.192894  700346 command_runner.go:130] >       "uid": null,
	I0923 13:31:24.192904  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.192911  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.192925  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.192934  700346 command_runner.go:130] >     },
	I0923 13:31:24.192940  700346 command_runner.go:130] >     {
	I0923 13:31:24.192952  700346 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0923 13:31:24.192963  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.192972  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0923 13:31:24.192985  700346 command_runner.go:130] >       ],
	I0923 13:31:24.192992  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.193004  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0923 13:31:24.193014  700346 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0923 13:31:24.193022  700346 command_runner.go:130] >       ],
	I0923 13:31:24.193026  700346 command_runner.go:130] >       "size": "68420934",
	I0923 13:31:24.193032  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.193036  700346 command_runner.go:130] >         "value": "0"
	I0923 13:31:24.193041  700346 command_runner.go:130] >       },
	I0923 13:31:24.193052  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.193059  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.193064  700346 command_runner.go:130] >       "pinned": false
	I0923 13:31:24.193068  700346 command_runner.go:130] >     },
	I0923 13:31:24.193072  700346 command_runner.go:130] >     {
	I0923 13:31:24.193077  700346 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0923 13:31:24.193084  700346 command_runner.go:130] >       "repoTags": [
	I0923 13:31:24.193089  700346 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0923 13:31:24.193093  700346 command_runner.go:130] >       ],
	I0923 13:31:24.193097  700346 command_runner.go:130] >       "repoDigests": [
	I0923 13:31:24.193106  700346 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0923 13:31:24.193116  700346 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0923 13:31:24.193124  700346 command_runner.go:130] >       ],
	I0923 13:31:24.193128  700346 command_runner.go:130] >       "size": "742080",
	I0923 13:31:24.193135  700346 command_runner.go:130] >       "uid": {
	I0923 13:31:24.193138  700346 command_runner.go:130] >         "value": "65535"
	I0923 13:31:24.193142  700346 command_runner.go:130] >       },
	I0923 13:31:24.193147  700346 command_runner.go:130] >       "username": "",
	I0923 13:31:24.193150  700346 command_runner.go:130] >       "spec": null,
	I0923 13:31:24.193155  700346 command_runner.go:130] >       "pinned": true
	I0923 13:31:24.193158  700346 command_runner.go:130] >     }
	I0923 13:31:24.193162  700346 command_runner.go:130] >   ]
	I0923 13:31:24.193165  700346 command_runner.go:130] > }
	I0923 13:31:24.193305  700346 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:31:24.193319  700346 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:31:24.193327  700346 kubeadm.go:934] updating node { 192.168.39.168 8443 v1.31.1 crio true true} ...
	I0923 13:31:24.193435  700346 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-851928 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-851928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
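
The kubelet unit text logged above is a systemd drop-in that clears and overrides ExecStart with node-specific flags. A minimal sketch of rendering it (not minikube's ssh_runner code; the drop-in path and systemctl steps in the trailing comment are conventional kubeadm-style assumptions):

package main

import "fmt"

func main() {
	nodeName := "multinode-851928"
	nodeIP := "192.168.39.168"

	// Render the same drop-in shown in the log, parameterised by node name and IP.
	unit := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, nodeName, nodeIP)
	fmt.Print(unit)

	// Applying it on the node would typically mean writing it to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, then running:
	//   sudo systemctl daemon-reload && sudo systemctl restart kubelet
}
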
	I0923 13:31:24.193517  700346 ssh_runner.go:195] Run: crio config
	I0923 13:31:24.240659  700346 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0923 13:31:24.240695  700346 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0923 13:31:24.240707  700346 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0923 13:31:24.240712  700346 command_runner.go:130] > #
	I0923 13:31:24.240722  700346 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0923 13:31:24.240731  700346 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0923 13:31:24.240744  700346 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0923 13:31:24.240795  700346 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0923 13:31:24.240807  700346 command_runner.go:130] > # reload'.
	I0923 13:31:24.240816  700346 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0923 13:31:24.240829  700346 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0923 13:31:24.240845  700346 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0923 13:31:24.240855  700346 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0923 13:31:24.240859  700346 command_runner.go:130] > [crio]
	I0923 13:31:24.240868  700346 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0923 13:31:24.240877  700346 command_runner.go:130] > # containers images, in this directory.
	I0923 13:31:24.240884  700346 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0923 13:31:24.240903  700346 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0923 13:31:24.240915  700346 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0923 13:31:24.240926  700346 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0923 13:31:24.240932  700346 command_runner.go:130] > # imagestore = ""
	I0923 13:31:24.240949  700346 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0923 13:31:24.240963  700346 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0923 13:31:24.240973  700346 command_runner.go:130] > storage_driver = "overlay"
	I0923 13:31:24.240981  700346 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0923 13:31:24.240992  700346 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0923 13:31:24.241002  700346 command_runner.go:130] > storage_option = [
	I0923 13:31:24.241009  700346 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0923 13:31:24.241028  700346 command_runner.go:130] > ]
	I0923 13:31:24.241039  700346 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0923 13:31:24.241050  700346 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0923 13:31:24.241061  700346 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0923 13:31:24.241074  700346 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0923 13:31:24.241088  700346 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0923 13:31:24.241097  700346 command_runner.go:130] > # always happen on a node reboot
	I0923 13:31:24.241105  700346 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0923 13:31:24.241128  700346 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0923 13:31:24.241140  700346 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0923 13:31:24.241147  700346 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0923 13:31:24.241158  700346 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0923 13:31:24.241172  700346 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0923 13:31:24.241186  700346 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0923 13:31:24.241197  700346 command_runner.go:130] > # internal_wipe = true
	I0923 13:31:24.241209  700346 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0923 13:31:24.241221  700346 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0923 13:31:24.241229  700346 command_runner.go:130] > # internal_repair = false
	I0923 13:31:24.241246  700346 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0923 13:31:24.241260  700346 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0923 13:31:24.241271  700346 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0923 13:31:24.241287  700346 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0923 13:31:24.241299  700346 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0923 13:31:24.241308  700346 command_runner.go:130] > [crio.api]
	I0923 13:31:24.241317  700346 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0923 13:31:24.241328  700346 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0923 13:31:24.241342  700346 command_runner.go:130] > # IP address on which the stream server will listen.
	I0923 13:31:24.241348  700346 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0923 13:31:24.241362  700346 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0923 13:31:24.241373  700346 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0923 13:31:24.241379  700346 command_runner.go:130] > # stream_port = "0"
	I0923 13:31:24.241391  700346 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0923 13:31:24.241403  700346 command_runner.go:130] > # stream_enable_tls = false
	I0923 13:31:24.241427  700346 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0923 13:31:24.241443  700346 command_runner.go:130] > # stream_idle_timeout = ""
	I0923 13:31:24.241454  700346 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0923 13:31:24.241466  700346 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0923 13:31:24.241474  700346 command_runner.go:130] > # minutes.
	I0923 13:31:24.241486  700346 command_runner.go:130] > # stream_tls_cert = ""
	I0923 13:31:24.241498  700346 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0923 13:31:24.241511  700346 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0923 13:31:24.241517  700346 command_runner.go:130] > # stream_tls_key = ""
	I0923 13:31:24.241526  700346 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0923 13:31:24.241538  700346 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0923 13:31:24.241568  700346 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0923 13:31:24.241580  700346 command_runner.go:130] > # stream_tls_ca = ""
	I0923 13:31:24.241592  700346 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0923 13:31:24.241601  700346 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0923 13:31:24.241612  700346 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0923 13:31:24.241621  700346 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0923 13:31:24.241631  700346 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0923 13:31:24.241643  700346 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0923 13:31:24.241650  700346 command_runner.go:130] > [crio.runtime]
	I0923 13:31:24.241661  700346 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0923 13:31:24.241672  700346 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0923 13:31:24.241678  700346 command_runner.go:130] > # "nofile=1024:2048"
	I0923 13:31:24.241693  700346 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0923 13:31:24.241700  700346 command_runner.go:130] > # default_ulimits = [
	I0923 13:31:24.241705  700346 command_runner.go:130] > # ]
	I0923 13:31:24.241714  700346 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0923 13:31:24.241725  700346 command_runner.go:130] > # no_pivot = false
	I0923 13:31:24.241734  700346 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0923 13:31:24.241744  700346 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0923 13:31:24.241751  700346 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0923 13:31:24.241767  700346 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0923 13:31:24.241779  700346 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0923 13:31:24.241800  700346 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0923 13:31:24.241811  700346 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0923 13:31:24.241818  700346 command_runner.go:130] > # Cgroup setting for conmon
	I0923 13:31:24.241849  700346 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0923 13:31:24.241858  700346 command_runner.go:130] > conmon_cgroup = "pod"
	I0923 13:31:24.241872  700346 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0923 13:31:24.241881  700346 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0923 13:31:24.241894  700346 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0923 13:31:24.241903  700346 command_runner.go:130] > conmon_env = [
	I0923 13:31:24.241915  700346 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0923 13:31:24.241925  700346 command_runner.go:130] > ]
	I0923 13:31:24.241934  700346 command_runner.go:130] > # Additional environment variables to set for all the
	I0923 13:31:24.241945  700346 command_runner.go:130] > # containers. These are overridden if set in the
	I0923 13:31:24.241954  700346 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0923 13:31:24.241962  700346 command_runner.go:130] > # default_env = [
	I0923 13:31:24.241968  700346 command_runner.go:130] > # ]
	I0923 13:31:24.241979  700346 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0923 13:31:24.241998  700346 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0923 13:31:24.242006  700346 command_runner.go:130] > # selinux = false
	I0923 13:31:24.242016  700346 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0923 13:31:24.242031  700346 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0923 13:31:24.242044  700346 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0923 13:31:24.242050  700346 command_runner.go:130] > # seccomp_profile = ""
	I0923 13:31:24.242063  700346 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0923 13:31:24.242074  700346 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0923 13:31:24.242086  700346 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0923 13:31:24.242094  700346 command_runner.go:130] > # which might increase security.
	I0923 13:31:24.242103  700346 command_runner.go:130] > # This option is currently deprecated,
	I0923 13:31:24.242113  700346 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0923 13:31:24.242124  700346 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0923 13:31:24.242136  700346 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0923 13:31:24.242147  700346 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0923 13:31:24.242161  700346 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0923 13:31:24.242185  700346 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0923 13:31:24.242198  700346 command_runner.go:130] > # This option supports live configuration reload.
	I0923 13:31:24.242211  700346 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0923 13:31:24.242221  700346 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0923 13:31:24.242227  700346 command_runner.go:130] > # the cgroup blockio controller.
	I0923 13:31:24.242236  700346 command_runner.go:130] > # blockio_config_file = ""
	I0923 13:31:24.242245  700346 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0923 13:31:24.242254  700346 command_runner.go:130] > # blockio parameters.
	I0923 13:31:24.242260  700346 command_runner.go:130] > # blockio_reload = false
	I0923 13:31:24.242272  700346 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0923 13:31:24.242281  700346 command_runner.go:130] > # irqbalance daemon.
	I0923 13:31:24.242288  700346 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0923 13:31:24.242300  700346 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0923 13:31:24.242311  700346 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0923 13:31:24.242324  700346 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0923 13:31:24.242333  700346 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0923 13:31:24.242348  700346 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0923 13:31:24.242359  700346 command_runner.go:130] > # This option supports live configuration reload.
	I0923 13:31:24.242368  700346 command_runner.go:130] > # rdt_config_file = ""
	I0923 13:31:24.242376  700346 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0923 13:31:24.242385  700346 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0923 13:31:24.242430  700346 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0923 13:31:24.242441  700346 command_runner.go:130] > # separate_pull_cgroup = ""
	I0923 13:31:24.242451  700346 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0923 13:31:24.242463  700346 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0923 13:31:24.242473  700346 command_runner.go:130] > # will be added.
	I0923 13:31:24.242484  700346 command_runner.go:130] > # default_capabilities = [
	I0923 13:31:24.242493  700346 command_runner.go:130] > # 	"CHOWN",
	I0923 13:31:24.242499  700346 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0923 13:31:24.242510  700346 command_runner.go:130] > # 	"FSETID",
	I0923 13:31:24.242516  700346 command_runner.go:130] > # 	"FOWNER",
	I0923 13:31:24.242524  700346 command_runner.go:130] > # 	"SETGID",
	I0923 13:31:24.242529  700346 command_runner.go:130] > # 	"SETUID",
	I0923 13:31:24.242546  700346 command_runner.go:130] > # 	"SETPCAP",
	I0923 13:31:24.242555  700346 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0923 13:31:24.242561  700346 command_runner.go:130] > # 	"KILL",
	I0923 13:31:24.242567  700346 command_runner.go:130] > # ]
	I0923 13:31:24.242579  700346 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0923 13:31:24.242590  700346 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0923 13:31:24.242601  700346 command_runner.go:130] > # add_inheritable_capabilities = false
	I0923 13:31:24.242610  700346 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0923 13:31:24.242622  700346 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0923 13:31:24.242631  700346 command_runner.go:130] > default_sysctls = [
	I0923 13:31:24.242642  700346 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0923 13:31:24.242650  700346 command_runner.go:130] > ]
	I0923 13:31:24.242657  700346 command_runner.go:130] > # List of devices on the host that a
	I0923 13:31:24.242670  700346 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0923 13:31:24.242678  700346 command_runner.go:130] > # allowed_devices = [
	I0923 13:31:24.242684  700346 command_runner.go:130] > # 	"/dev/fuse",
	I0923 13:31:24.242692  700346 command_runner.go:130] > # ]
	I0923 13:31:24.242701  700346 command_runner.go:130] > # List of additional devices. specified as
	I0923 13:31:24.242714  700346 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0923 13:31:24.242727  700346 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0923 13:31:24.242739  700346 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0923 13:31:24.242745  700346 command_runner.go:130] > # additional_devices = [
	I0923 13:31:24.242759  700346 command_runner.go:130] > # ]
	I0923 13:31:24.242771  700346 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0923 13:31:24.242777  700346 command_runner.go:130] > # cdi_spec_dirs = [
	I0923 13:31:24.242783  700346 command_runner.go:130] > # 	"/etc/cdi",
	I0923 13:31:24.242793  700346 command_runner.go:130] > # 	"/var/run/cdi",
	I0923 13:31:24.242798  700346 command_runner.go:130] > # ]
	I0923 13:31:24.242812  700346 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0923 13:31:24.242824  700346 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0923 13:31:24.242837  700346 command_runner.go:130] > # Defaults to false.
	I0923 13:31:24.242844  700346 command_runner.go:130] > # device_ownership_from_security_context = false
	I0923 13:31:24.242857  700346 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0923 13:31:24.242876  700346 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0923 13:31:24.242885  700346 command_runner.go:130] > # hooks_dir = [
	I0923 13:31:24.242893  700346 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0923 13:31:24.242900  700346 command_runner.go:130] > # ]
	I0923 13:31:24.242909  700346 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0923 13:31:24.242921  700346 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0923 13:31:24.242932  700346 command_runner.go:130] > # its default mounts from the following two files:
	I0923 13:31:24.242941  700346 command_runner.go:130] > #
	I0923 13:31:24.242950  700346 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0923 13:31:24.242962  700346 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0923 13:31:24.242975  700346 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0923 13:31:24.242984  700346 command_runner.go:130] > #
	I0923 13:31:24.242993  700346 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0923 13:31:24.243005  700346 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0923 13:31:24.243019  700346 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0923 13:31:24.243028  700346 command_runner.go:130] > #      only add mounts it finds in this file.
	I0923 13:31:24.243036  700346 command_runner.go:130] > #
	I0923 13:31:24.243043  700346 command_runner.go:130] > # default_mounts_file = ""
	I0923 13:31:24.243054  700346 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0923 13:31:24.243071  700346 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0923 13:31:24.243080  700346 command_runner.go:130] > pids_limit = 1024
	I0923 13:31:24.243091  700346 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0923 13:31:24.243104  700346 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0923 13:31:24.243117  700346 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0923 13:31:24.243129  700346 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0923 13:31:24.243138  700346 command_runner.go:130] > # log_size_max = -1
	I0923 13:31:24.243149  700346 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0923 13:31:24.243158  700346 command_runner.go:130] > # log_to_journald = false
	I0923 13:31:24.243167  700346 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0923 13:31:24.243178  700346 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0923 13:31:24.243185  700346 command_runner.go:130] > # Path to directory for container attach sockets.
	I0923 13:31:24.243196  700346 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0923 13:31:24.243205  700346 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0923 13:31:24.243221  700346 command_runner.go:130] > # bind_mount_prefix = ""
	I0923 13:31:24.243233  700346 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0923 13:31:24.243244  700346 command_runner.go:130] > # read_only = false
	I0923 13:31:24.243254  700346 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0923 13:31:24.243266  700346 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0923 13:31:24.243275  700346 command_runner.go:130] > # live configuration reload.
	I0923 13:31:24.243281  700346 command_runner.go:130] > # log_level = "info"
	I0923 13:31:24.243292  700346 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0923 13:31:24.243302  700346 command_runner.go:130] > # This option supports live configuration reload.
	I0923 13:31:24.243311  700346 command_runner.go:130] > # log_filter = ""
	I0923 13:31:24.243320  700346 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0923 13:31:24.243332  700346 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0923 13:31:24.243341  700346 command_runner.go:130] > # separated by comma.
	I0923 13:31:24.243351  700346 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 13:31:24.243361  700346 command_runner.go:130] > # uid_mappings = ""
	I0923 13:31:24.243369  700346 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0923 13:31:24.243382  700346 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0923 13:31:24.243389  700346 command_runner.go:130] > # separated by comma.
	I0923 13:31:24.243401  700346 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 13:31:24.243410  700346 command_runner.go:130] > # gid_mappings = ""
	I0923 13:31:24.243420  700346 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0923 13:31:24.243435  700346 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0923 13:31:24.243451  700346 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0923 13:31:24.243466  700346 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 13:31:24.243490  700346 command_runner.go:130] > # minimum_mappable_uid = -1
	I0923 13:31:24.243503  700346 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0923 13:31:24.243516  700346 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0923 13:31:24.243528  700346 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0923 13:31:24.243540  700346 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 13:31:24.243550  700346 command_runner.go:130] > # minimum_mappable_gid = -1
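	As the comments above describe, a mapping is written containerID:hostID:size. Purely as an illustration (the host start 100000 and length 65536 are arbitrary values, not taken from this cluster), an explicit mapping would look like:
	
		uid_mappings = "0:100000:65536"
		gid_mappings = "0:100000:65536"
	
	Both options are deprecated in favour of Kubernetes user namespace support (KEP-127), as noted above.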
	I0923 13:31:24.243558  700346 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0923 13:31:24.243568  700346 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0923 13:31:24.243578  700346 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0923 13:31:24.243594  700346 command_runner.go:130] > # ctr_stop_timeout = 30
	I0923 13:31:24.243606  700346 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0923 13:31:24.243621  700346 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0923 13:31:24.243633  700346 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0923 13:31:24.243643  700346 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0923 13:31:24.243649  700346 command_runner.go:130] > drop_infra_ctr = false
	I0923 13:31:24.243661  700346 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0923 13:31:24.243673  700346 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0923 13:31:24.243685  700346 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0923 13:31:24.243694  700346 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0923 13:31:24.243706  700346 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0923 13:31:24.243718  700346 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0923 13:31:24.243732  700346 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0923 13:31:24.243743  700346 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0923 13:31:24.243753  700346 command_runner.go:130] > # shared_cpuset = ""
	I0923 13:31:24.243765  700346 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0923 13:31:24.243774  700346 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0923 13:31:24.243785  700346 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0923 13:31:24.243797  700346 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0923 13:31:24.243806  700346 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0923 13:31:24.243816  700346 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0923 13:31:24.243829  700346 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0923 13:31:24.243838  700346 command_runner.go:130] > # enable_criu_support = false
	I0923 13:31:24.243847  700346 command_runner.go:130] > # Enable/disable the generation of the container,
	I0923 13:31:24.243864  700346 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0923 13:31:24.243874  700346 command_runner.go:130] > # enable_pod_events = false
	I0923 13:31:24.243884  700346 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0923 13:31:24.243906  700346 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0923 13:31:24.243915  700346 command_runner.go:130] > # default_runtime = "runc"
	I0923 13:31:24.243923  700346 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0923 13:31:24.243937  700346 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0923 13:31:24.243958  700346 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0923 13:31:24.243975  700346 command_runner.go:130] > # creation as a file is not desired either.
	I0923 13:31:24.243990  700346 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0923 13:31:24.244002  700346 command_runner.go:130] > # the hostname is being managed dynamically.
	I0923 13:31:24.244010  700346 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0923 13:31:24.244017  700346 command_runner.go:130] > # ]
	I0923 13:31:24.244027  700346 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0923 13:31:24.244039  700346 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0923 13:31:24.244051  700346 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0923 13:31:24.244063  700346 command_runner.go:130] > # Each entry in the table should follow the format:
	I0923 13:31:24.244071  700346 command_runner.go:130] > #
	I0923 13:31:24.244078  700346 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0923 13:31:24.244088  700346 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0923 13:31:24.244140  700346 command_runner.go:130] > # runtime_type = "oci"
	I0923 13:31:24.244152  700346 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0923 13:31:24.244159  700346 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0923 13:31:24.244169  700346 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0923 13:31:24.244180  700346 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0923 13:31:24.244190  700346 command_runner.go:130] > # monitor_env = []
	I0923 13:31:24.244198  700346 command_runner.go:130] > # privileged_without_host_devices = false
	I0923 13:31:24.244207  700346 command_runner.go:130] > # allowed_annotations = []
	I0923 13:31:24.244215  700346 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0923 13:31:24.244223  700346 command_runner.go:130] > # Where:
	I0923 13:31:24.244232  700346 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0923 13:31:24.244244  700346 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0923 13:31:24.244260  700346 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0923 13:31:24.244271  700346 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0923 13:31:24.244280  700346 command_runner.go:130] > #   in $PATH.
	I0923 13:31:24.244290  700346 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0923 13:31:24.244300  700346 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0923 13:31:24.244315  700346 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0923 13:31:24.244324  700346 command_runner.go:130] > #   state.
	I0923 13:31:24.244333  700346 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0923 13:31:24.244345  700346 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0923 13:31:24.244360  700346 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0923 13:31:24.244372  700346 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0923 13:31:24.244384  700346 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0923 13:31:24.244397  700346 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0923 13:31:24.244407  700346 command_runner.go:130] > #   The currently recognized values are:
	I0923 13:31:24.244417  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0923 13:31:24.244431  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0923 13:31:24.244440  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0923 13:31:24.244450  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0923 13:31:24.244457  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0923 13:31:24.244465  700346 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0923 13:31:24.244471  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0923 13:31:24.244478  700346 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0923 13:31:24.244489  700346 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0923 13:31:24.244495  700346 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0923 13:31:24.244502  700346 command_runner.go:130] > #   deprecated option "conmon".
	I0923 13:31:24.244511  700346 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0923 13:31:24.244519  700346 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0923 13:31:24.244525  700346 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0923 13:31:24.244532  700346 command_runner.go:130] > #   should be moved to the container's cgroup
	I0923 13:31:24.244538  700346 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0923 13:31:24.244545  700346 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0923 13:31:24.244552  700346 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0923 13:31:24.244561  700346 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0923 13:31:24.244566  700346 command_runner.go:130] > #
	I0923 13:31:24.244571  700346 command_runner.go:130] > # Using the seccomp notifier feature:
	I0923 13:31:24.244574  700346 command_runner.go:130] > #
	I0923 13:31:24.244580  700346 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0923 13:31:24.244588  700346 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0923 13:31:24.244592  700346 command_runner.go:130] > #
	I0923 13:31:24.244602  700346 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0923 13:31:24.244610  700346 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0923 13:31:24.244614  700346 command_runner.go:130] > #
	I0923 13:31:24.244621  700346 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0923 13:31:24.244627  700346 command_runner.go:130] > # feature.
	I0923 13:31:24.244630  700346 command_runner.go:130] > #
	I0923 13:31:24.244636  700346 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0923 13:31:24.244644  700346 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0923 13:31:24.244650  700346 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0923 13:31:24.244658  700346 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0923 13:31:24.244664  700346 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0923 13:31:24.244669  700346 command_runner.go:130] > #
	I0923 13:31:24.244674  700346 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0923 13:31:24.244680  700346 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0923 13:31:24.244685  700346 command_runner.go:130] > #
	I0923 13:31:24.244690  700346 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0923 13:31:24.244697  700346 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0923 13:31:24.244701  700346 command_runner.go:130] > #
	I0923 13:31:24.244707  700346 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0923 13:31:24.244714  700346 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0923 13:31:24.244718  700346 command_runner.go:130] > # limitation.
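	Putting the notes above together, a minimal sketch of enabling the notifier (illustrative only; which runtime handler to use and the pod itself are assumptions, not part of this node's configuration) is to allow the annotation on the handler and set it on the pod:
	
		# crio.conf, on the chosen runtime handler
		allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	
		# on the Pod (restartPolicy must be Never, per the note above)
		metadata:
		  annotations:
		    io.kubernetes.cri-o.seccompNotifierAction: "stop"
		spec:
		  restartPolicy: Never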
	I0923 13:31:24.244724  700346 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0923 13:31:24.244728  700346 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0923 13:31:24.244733  700346 command_runner.go:130] > runtime_type = "oci"
	I0923 13:31:24.244737  700346 command_runner.go:130] > runtime_root = "/run/runc"
	I0923 13:31:24.244744  700346 command_runner.go:130] > runtime_config_path = ""
	I0923 13:31:24.244748  700346 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0923 13:31:24.244752  700346 command_runner.go:130] > monitor_cgroup = "pod"
	I0923 13:31:24.244756  700346 command_runner.go:130] > monitor_exec_cgroup = ""
	I0923 13:31:24.244760  700346 command_runner.go:130] > monitor_env = [
	I0923 13:31:24.244765  700346 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0923 13:31:24.244770  700346 command_runner.go:130] > ]
	I0923 13:31:24.244775  700346 command_runner.go:130] > privileged_without_host_devices = false
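	The runc entry above follows the runtime-handler table format documented earlier. A second handler would be declared the same way; the following is only a hypothetical sketch (crun and its paths are assumptions, not something configured on this node):
	
		[crio.runtime.runtimes.crun]
		runtime_path = "/usr/bin/crun"
		runtime_type = "oci"
		runtime_root = "/run/crun"
		monitor_path = "/usr/libexec/crio/conmon"
		monitor_cgroup = "pod"
	
	Pods would then reach it through the CRI runtime handler, for example via a Kubernetes RuntimeClass whose handler name matches the table key.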
	I0923 13:31:24.244781  700346 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0923 13:31:24.244787  700346 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0923 13:31:24.244793  700346 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0923 13:31:24.244802  700346 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0923 13:31:24.244811  700346 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0923 13:31:24.244816  700346 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0923 13:31:24.244830  700346 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0923 13:31:24.244839  700346 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0923 13:31:24.244844  700346 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0923 13:31:24.244853  700346 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0923 13:31:24.244859  700346 command_runner.go:130] > # Example:
	I0923 13:31:24.244863  700346 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0923 13:31:24.244868  700346 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0923 13:31:24.244875  700346 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0923 13:31:24.244880  700346 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0923 13:31:24.244883  700346 command_runner.go:130] > # cpuset = 0
	I0923 13:31:24.244889  700346 command_runner.go:130] > # cpushares = "0-1"
	I0923 13:31:24.244892  700346 command_runner.go:130] > # Where:
	I0923 13:31:24.244901  700346 command_runner.go:130] > # The workload name is workload-type.
	I0923 13:31:24.244916  700346 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0923 13:31:24.244927  700346 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0923 13:31:24.244936  700346 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0923 13:31:24.244949  700346 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0923 13:31:24.244961  700346 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
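	Following the example above, a pod opting into the hypothetical workload-type workload would carry the activation annotation plus an optional per-container override (the container name and cpushares value below are illustrative assumptions):
	
		metadata:
		  annotations:
		    io.crio/workload: ""                                        # activation annotation, key only
		    io.crio.workload-type/mycontainer: '{"cpushares": "512"}'   # per-container override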
	I0923 13:31:24.244970  700346 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0923 13:31:24.244982  700346 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0923 13:31:24.244991  700346 command_runner.go:130] > # Default value is set to true
	I0923 13:31:24.244998  700346 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0923 13:31:24.245008  700346 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0923 13:31:24.245017  700346 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0923 13:31:24.245021  700346 command_runner.go:130] > # Default value is set to 'false'
	I0923 13:31:24.245026  700346 command_runner.go:130] > # disable_hostport_mapping = false
	I0923 13:31:24.245033  700346 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0923 13:31:24.245038  700346 command_runner.go:130] > #
	I0923 13:31:24.245044  700346 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0923 13:31:24.245050  700346 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0923 13:31:24.245059  700346 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0923 13:31:24.245066  700346 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0923 13:31:24.245071  700346 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0923 13:31:24.245074  700346 command_runner.go:130] > [crio.image]
	I0923 13:31:24.245080  700346 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0923 13:31:24.245083  700346 command_runner.go:130] > # default_transport = "docker://"
	I0923 13:31:24.245092  700346 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0923 13:31:24.245098  700346 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0923 13:31:24.245102  700346 command_runner.go:130] > # global_auth_file = ""
	I0923 13:31:24.245106  700346 command_runner.go:130] > # The image used to instantiate infra containers.
	I0923 13:31:24.245111  700346 command_runner.go:130] > # This option supports live configuration reload.
	I0923 13:31:24.245115  700346 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0923 13:31:24.245121  700346 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0923 13:31:24.245126  700346 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0923 13:31:24.245131  700346 command_runner.go:130] > # This option supports live configuration reload.
	I0923 13:31:24.245135  700346 command_runner.go:130] > # pause_image_auth_file = ""
	I0923 13:31:24.245140  700346 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0923 13:31:24.245146  700346 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0923 13:31:24.245152  700346 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0923 13:31:24.245157  700346 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0923 13:31:24.245161  700346 command_runner.go:130] > # pause_command = "/pause"
	I0923 13:31:24.245166  700346 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0923 13:31:24.245173  700346 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0923 13:31:24.245178  700346 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0923 13:31:24.245185  700346 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0923 13:31:24.245191  700346 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0923 13:31:24.245197  700346 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0923 13:31:24.245200  700346 command_runner.go:130] > # pinned_images = [
	I0923 13:31:24.245203  700346 command_runner.go:130] > # ]
	I0923 13:31:24.245209  700346 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0923 13:31:24.245214  700346 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0923 13:31:24.245220  700346 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0923 13:31:24.245225  700346 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0923 13:31:24.245232  700346 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0923 13:31:24.245237  700346 command_runner.go:130] > # signature_policy = ""
	I0923 13:31:24.245242  700346 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0923 13:31:24.245250  700346 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0923 13:31:24.245256  700346 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0923 13:31:24.245262  700346 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0923 13:31:24.245267  700346 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0923 13:31:24.245272  700346 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0923 13:31:24.245278  700346 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0923 13:31:24.245286  700346 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0923 13:31:24.245293  700346 command_runner.go:130] > # changing them here.
	I0923 13:31:24.245297  700346 command_runner.go:130] > # insecure_registries = [
	I0923 13:31:24.245300  700346 command_runner.go:130] > # ]
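	If an insecure registry did need to be allowed at the CRI-O level rather than in /etc/containers/registries.conf (the recommended place per the comment above), the entry would be a hypothetical value such as:
	
		insecure_registries = [
			"my-registry.local:5000",
		]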
	I0923 13:31:24.245309  700346 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0923 13:31:24.245313  700346 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0923 13:31:24.245318  700346 command_runner.go:130] > # image_volumes = "mkdir"
	I0923 13:31:24.245324  700346 command_runner.go:130] > # Temporary directory to use for storing big files
	I0923 13:31:24.245330  700346 command_runner.go:130] > # big_files_temporary_dir = ""
	I0923 13:31:24.245336  700346 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0923 13:31:24.245342  700346 command_runner.go:130] > # CNI plugins.
	I0923 13:31:24.245346  700346 command_runner.go:130] > [crio.network]
	I0923 13:31:24.245353  700346 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0923 13:31:24.245361  700346 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0923 13:31:24.245365  700346 command_runner.go:130] > # cni_default_network = ""
	I0923 13:31:24.245370  700346 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0923 13:31:24.245375  700346 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0923 13:31:24.245382  700346 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0923 13:31:24.245388  700346 command_runner.go:130] > # plugin_dirs = [
	I0923 13:31:24.245391  700346 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0923 13:31:24.245396  700346 command_runner.go:130] > # ]
	I0923 13:31:24.245402  700346 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0923 13:31:24.245407  700346 command_runner.go:130] > [crio.metrics]
	I0923 13:31:24.245412  700346 command_runner.go:130] > # Globally enable or disable metrics support.
	I0923 13:31:24.245418  700346 command_runner.go:130] > enable_metrics = true
	I0923 13:31:24.245423  700346 command_runner.go:130] > # Specify enabled metrics collectors.
	I0923 13:31:24.245429  700346 command_runner.go:130] > # Per default all metrics are enabled.
	I0923 13:31:24.245435  700346 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0923 13:31:24.245443  700346 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0923 13:31:24.245448  700346 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0923 13:31:24.245454  700346 command_runner.go:130] > # metrics_collectors = [
	I0923 13:31:24.245458  700346 command_runner.go:130] > # 	"operations",
	I0923 13:31:24.245462  700346 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0923 13:31:24.245467  700346 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0923 13:31:24.245471  700346 command_runner.go:130] > # 	"operations_errors",
	I0923 13:31:24.245475  700346 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0923 13:31:24.245484  700346 command_runner.go:130] > # 	"image_pulls_by_name",
	I0923 13:31:24.245490  700346 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0923 13:31:24.245494  700346 command_runner.go:130] > # 	"image_pulls_failures",
	I0923 13:31:24.245498  700346 command_runner.go:130] > # 	"image_pulls_successes",
	I0923 13:31:24.245502  700346 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0923 13:31:24.245506  700346 command_runner.go:130] > # 	"image_layer_reuse",
	I0923 13:31:24.245510  700346 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0923 13:31:24.245517  700346 command_runner.go:130] > # 	"containers_oom_total",
	I0923 13:31:24.245522  700346 command_runner.go:130] > # 	"containers_oom",
	I0923 13:31:24.245526  700346 command_runner.go:130] > # 	"processes_defunct",
	I0923 13:31:24.245530  700346 command_runner.go:130] > # 	"operations_total",
	I0923 13:31:24.245534  700346 command_runner.go:130] > # 	"operations_latency_seconds",
	I0923 13:31:24.245538  700346 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0923 13:31:24.245542  700346 command_runner.go:130] > # 	"operations_errors_total",
	I0923 13:31:24.245546  700346 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0923 13:31:24.245551  700346 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0923 13:31:24.245557  700346 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0923 13:31:24.245561  700346 command_runner.go:130] > # 	"image_pulls_success_total",
	I0923 13:31:24.245565  700346 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0923 13:31:24.245569  700346 command_runner.go:130] > # 	"containers_oom_count_total",
	I0923 13:31:24.245574  700346 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0923 13:31:24.245581  700346 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0923 13:31:24.245584  700346 command_runner.go:130] > # ]
	I0923 13:31:24.245589  700346 command_runner.go:130] > # The port on which the metrics server will listen.
	I0923 13:31:24.245595  700346 command_runner.go:130] > # metrics_port = 9090
	I0923 13:31:24.245600  700346 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0923 13:31:24.245606  700346 command_runner.go:130] > # metrics_socket = ""
	I0923 13:31:24.245611  700346 command_runner.go:130] > # The certificate for the secure metrics server.
	I0923 13:31:24.245618  700346 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0923 13:31:24.245624  700346 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0923 13:31:24.245630  700346 command_runner.go:130] > # certificate on any modification event.
	I0923 13:31:24.245634  700346 command_runner.go:130] > # metrics_cert = ""
	I0923 13:31:24.245641  700346 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0923 13:31:24.245647  700346 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0923 13:31:24.245651  700346 command_runner.go:130] > # metrics_key = ""
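	With enable_metrics = true and the default metrics_port of 9090 shown above, the Prometheus endpoint can be queried directly from the node; a quick check would look roughly like this (the metric name comes from the collector list above):
	
		curl -s http://127.0.0.1:9090/metrics | grep crio_operations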
	I0923 13:31:24.245657  700346 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0923 13:31:24.245661  700346 command_runner.go:130] > [crio.tracing]
	I0923 13:31:24.245666  700346 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0923 13:31:24.245671  700346 command_runner.go:130] > # enable_tracing = false
	I0923 13:31:24.245676  700346 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0923 13:31:24.245682  700346 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0923 13:31:24.245688  700346 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0923 13:31:24.245693  700346 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0923 13:31:24.245699  700346 command_runner.go:130] > # CRI-O NRI configuration.
	I0923 13:31:24.245703  700346 command_runner.go:130] > [crio.nri]
	I0923 13:31:24.245707  700346 command_runner.go:130] > # Globally enable or disable NRI.
	I0923 13:31:24.245711  700346 command_runner.go:130] > # enable_nri = false
	I0923 13:31:24.245715  700346 command_runner.go:130] > # NRI socket to listen on.
	I0923 13:31:24.245719  700346 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0923 13:31:24.245723  700346 command_runner.go:130] > # NRI plugin directory to use.
	I0923 13:31:24.245728  700346 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0923 13:31:24.245735  700346 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0923 13:31:24.245742  700346 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0923 13:31:24.245748  700346 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0923 13:31:24.245755  700346 command_runner.go:130] > # nri_disable_connections = false
	I0923 13:31:24.245760  700346 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0923 13:31:24.245766  700346 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0923 13:31:24.245771  700346 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0923 13:31:24.245775  700346 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0923 13:31:24.245780  700346 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0923 13:31:24.245786  700346 command_runner.go:130] > [crio.stats]
	I0923 13:31:24.245791  700346 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0923 13:31:24.245798  700346 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0923 13:31:24.245802  700346 command_runner.go:130] > # stats_collection_period = 0
	I0923 13:31:24.246277  700346 command_runner.go:130] ! time="2024-09-23 13:31:24.206836088Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0923 13:31:24.246306  700346 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0923 13:31:24.246411  700346 cni.go:84] Creating CNI manager for ""
	I0923 13:31:24.246428  700346 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:31:24.246440  700346 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:31:24.246476  700346 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-851928 NodeName:multinode-851928 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:31:24.246628  700346 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-851928"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 13:31:24.246695  700346 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:31:24.257427  700346 command_runner.go:130] > kubeadm
	I0923 13:31:24.257452  700346 command_runner.go:130] > kubectl
	I0923 13:31:24.257457  700346 command_runner.go:130] > kubelet
	I0923 13:31:24.257482  700346 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:31:24.257552  700346 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:31:24.267759  700346 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0923 13:31:24.284841  700346 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:31:24.301903  700346 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
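	The kubeadm config rendered above is the payload copied to /var/tmp/minikube/kubeadm.yaml.new here (2160 bytes). When minikube needs to (re)initialise or reconfigure the control plane it hands such a file to kubeadm; stripped of minikube's extra flags, the invocation is roughly of the form (a sketch, exact flags vary by minikube version):
	
		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml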
	I0923 13:31:24.318354  700346 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0923 13:31:24.322165  700346 command_runner.go:130] > 192.168.39.168	control-plane.minikube.internal
	I0923 13:31:24.322268  700346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:31:24.464275  700346 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:31:24.479388  700346 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928 for IP: 192.168.39.168
	I0923 13:31:24.479419  700346 certs.go:194] generating shared ca certs ...
	I0923 13:31:24.479438  700346 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:31:24.479622  700346 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 13:31:24.479659  700346 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 13:31:24.479668  700346 certs.go:256] generating profile certs ...
	I0923 13:31:24.479763  700346 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/client.key
	I0923 13:31:24.479835  700346 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/apiserver.key.897c86c7
	I0923 13:31:24.479869  700346 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/proxy-client.key
	I0923 13:31:24.479881  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:31:24.479899  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:31:24.479912  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:31:24.479922  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:31:24.479934  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 13:31:24.479947  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 13:31:24.479959  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 13:31:24.479970  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 13:31:24.480019  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 13:31:24.480047  700346 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 13:31:24.480056  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 13:31:24.480077  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 13:31:24.480101  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:31:24.480124  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 13:31:24.480161  700346 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:31:24.480191  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:31:24.480203  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem -> /usr/share/ca-certificates/669447.pem
	I0923 13:31:24.480213  700346 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> /usr/share/ca-certificates/6694472.pem
	I0923 13:31:24.480839  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:31:24.506795  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:31:24.532921  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:31:24.558600  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:31:24.584744  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 13:31:24.612111  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 13:31:24.636842  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:31:24.660958  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/multinode-851928/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 13:31:24.685304  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:31:24.710049  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 13:31:24.734431  700346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 13:31:24.760047  700346 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:31:24.777922  700346 ssh_runner.go:195] Run: openssl version
	I0923 13:31:24.783832  700346 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 13:31:24.783927  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:31:24.794949  700346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:31:24.799148  700346 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:31:24.799236  700346 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:31:24.799337  700346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:31:24.804750  700346 command_runner.go:130] > b5213941
	I0923 13:31:24.804850  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:31:24.814371  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 13:31:24.825128  700346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 13:31:24.829436  700346 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 13:31:24.829475  700346 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 13:31:24.829516  700346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 13:31:24.834928  700346 command_runner.go:130] > 51391683
	I0923 13:31:24.835037  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 13:31:24.844672  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 13:31:24.855224  700346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 13:31:24.859605  700346 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 13:31:24.859728  700346 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 13:31:24.859785  700346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 13:31:24.865348  700346 command_runner.go:130] > 3ec20f2e
	I0923 13:31:24.865451  700346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
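	The three blocks above all follow the same OpenSSL trust-store pattern: compute the subject hash of the certificate, then symlink the PEM under /etc/ssl/certs as <hash>.0. As a standalone sketch of what the runner is doing (paths taken from the log, variable name illustrative):
	
		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"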
	I0923 13:31:24.882267  700346 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:31:24.891053  700346 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:31:24.891096  700346 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0923 13:31:24.891105  700346 command_runner.go:130] > Device: 253,1	Inode: 6289960     Links: 1
	I0923 13:31:24.891114  700346 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 13:31:24.891123  700346 command_runner.go:130] > Access: 2024-09-23 13:24:38.817981363 +0000
	I0923 13:31:24.891129  700346 command_runner.go:130] > Modify: 2024-09-23 13:24:38.817981363 +0000
	I0923 13:31:24.891136  700346 command_runner.go:130] > Change: 2024-09-23 13:24:38.817981363 +0000
	I0923 13:31:24.891143  700346 command_runner.go:130] >  Birth: 2024-09-23 13:24:38.817981363 +0000
	I0923 13:31:24.891450  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 13:31:24.901001  700346 command_runner.go:130] > Certificate will not expire
	I0923 13:31:24.901155  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 13:31:24.906807  700346 command_runner.go:130] > Certificate will not expire
	I0923 13:31:24.906903  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 13:31:24.912729  700346 command_runner.go:130] > Certificate will not expire
	I0923 13:31:24.912843  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 13:31:24.918576  700346 command_runner.go:130] > Certificate will not expire
	I0923 13:31:24.918676  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 13:31:24.924441  700346 command_runner.go:130] > Certificate will not expire
	I0923 13:31:24.924587  700346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 13:31:24.930113  700346 command_runner.go:130] > Certificate will not expire
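	Each of the checks above relies on openssl's -checkend flag: the command exits 0 (and prints "Certificate will not expire") if the certificate is still valid 86400 seconds, i.e. 24 hours, from now. Run by hand against one of the same files it looks like:
	
		openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
			&& echo "valid for at least 24h" \
			|| echo "expires within 24h (or is invalid)"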
	I0923 13:31:24.930180  700346 kubeadm.go:392] StartCluster: {Name:multinode-851928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-851928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.25 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:31:24.930331  700346 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 13:31:24.930398  700346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:31:24.968169  700346 command_runner.go:130] > f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983
	I0923 13:31:24.968208  700346 command_runner.go:130] > 1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2
	I0923 13:31:24.968217  700346 command_runner.go:130] > 56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f
	I0923 13:31:24.968228  700346 command_runner.go:130] > 618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194
	I0923 13:31:24.968237  700346 command_runner.go:130] > 0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c
	I0923 13:31:24.968246  700346 command_runner.go:130] > eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4
	I0923 13:31:24.968257  700346 command_runner.go:130] > 306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16
	I0923 13:31:24.968266  700346 command_runner.go:130] > 692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c
	I0923 13:31:24.968300  700346 cri.go:89] found id: "f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983"
	I0923 13:31:24.968309  700346 cri.go:89] found id: "1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2"
	I0923 13:31:24.968313  700346 cri.go:89] found id: "56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f"
	I0923 13:31:24.968316  700346 cri.go:89] found id: "618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194"
	I0923 13:31:24.968319  700346 cri.go:89] found id: "0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c"
	I0923 13:31:24.968325  700346 cri.go:89] found id: "eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4"
	I0923 13:31:24.968328  700346 cri.go:89] found id: "306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16"
	I0923 13:31:24.968330  700346 cri.go:89] found id: "692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c"
	I0923 13:31:24.968333  700346 cri.go:89] found id: ""
	I0923 13:31:24.968378  700346 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.597502180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098537597477122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec76bbc8-1871-4526-b350-d23c70266f67 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.598379206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22fb06d7-bb91-47dc-94ee-1952199ac48f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.598446848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22fb06d7-bb91-47dc-94ee-1952199ac48f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.598872882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:359df88602eac689ea6f8a2d7a9975a0547e1bed39f60881b82a92736e6cb009,PodSandboxId:f6165aa9f5232cfa3983ad7ba5f2b01443acb0172047f93b410cfee89ed7e6c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727098325018657974,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99756f6019b4d7b958225ab6f9327a6b8c9203fbf3a2d830b5062cdd86647ba,PodSandboxId:ca0338566743328d45d72d7892aec5e33c54ac1153f64b0d8b1e540310a4ac9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727098291493819395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654185fbfcb03063b85dbe6773f55ad2831999a20283aff14d6229a7ce62f672,PodSandboxId:88dbc26750f98450a4227b6447782034ac35694dfbd57bd8a44b24bc4e3b3a16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727098291533214262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db6c0468a85317df9394dd318f8bc43400bc0ce88d7077411f6b535fd107e48,PodSandboxId:7a8400923cb977887a353ac861412e9718431dd173807754346e28fbbf73f550,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727098291368736908,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b-b32c85869126,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6979fac3f5b0c2eaef8551a532606c73e38cd83f774096644077a457b0c1ff,PodSandboxId:a4037f5308176d337b2deca77b44b8af2c73565b79e10ee4714a1a5d145710ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098291320194016,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ac0945c09a0432f1c1e1b73c691250779a94d3a34f10a893924ea884b2b3d0,PodSandboxId:d71d835adcd1d15844807d2a291e6fef6cb20e1b18b5172ee25e4cbbbac47f3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727098287473406807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ae473221e9e088de9eef2ef703d5d3f0766f4014f7fbc1d037c679d3e2baac,PodSandboxId:347f0941c6625f27283194de8d4fd32006b3d380888a85678b2f8a7063e5aa4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727098287475020753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e3cea42ba31eed103d200b57d82aaa7e8e100a2266c766104a3a52b620a95f,PodSandboxId:828939191cba88d1217296fdda58434ecfb1563b0377c4b0b25bbce93519acfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727098287437499929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e4fac777cc7ccf86d00ee7e26a7e351940e38e66d7ed676e56f1c859842e6bf,PodSandboxId:055632a81dcd102f90add8d8a980bfe8dc44e947f121f368e0a4854dc05c8b58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727098287404054814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c269baf9d7255a9bfe6cfce34bb0d26bc2c217b16c3c6150bd0199bb43fe0fd,PodSandboxId:68ce621d149dbab89bbf1d40250aeafe88c567a5cda763b753d58dbe7b5983bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727097962330352006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983,PodSandboxId:7e8debb3a7a32a909f49fdbb6b4e160cf52736f78ed06f1b8109978ac36d9d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727097906078294996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2,PodSandboxId:93eb120762a7ecc71cd179f3bdad4a6269c404b8b84f7807e052cc87f2cbe855,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727097906023278582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f,PodSandboxId:6d7dd123595bb380fa71402185f5498f549c3dd5b06c7706f0731b2908d24371,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727097894067120166,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194,PodSandboxId:799bf1c098365a9033865dcd72e0da7b176b1f4120dc73536f70e4e1709169ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727097893819208405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b
-b32c85869126,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c,PodSandboxId:ce7bd0baa5c47857eca3027f914a72474f6728c7a0efb049b7d513de9c55b8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727097883225903642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16,PodSandboxId:8df9ec5d7f3b25e1c83cdfa42036161a0e93ee475c14e4a554ada703c1ce083a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727097883190370572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4,PodSandboxId:25e766a6006484584935633eff06c186880f1751c10365776bd6e07ac9e5a007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727097883221568719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c,PodSandboxId:6f6f297d2bf0b1ed1d4a99885a76e9163ea663e071b08c4b311ac857f507925f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727097883164371598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22fb06d7-bb91-47dc-94ee-1952199ac48f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.640944397Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=46d68929-b2ec-46d6-a1b4-c8905058ea53 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.641042951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=46d68929-b2ec-46d6-a1b4-c8905058ea53 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.642843976Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0730e755-0cff-4a4f-9d8c-18b483089bf1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.643562100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098537643528700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0730e755-0cff-4a4f-9d8c-18b483089bf1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.644247884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6de412e-dda1-423f-a7f0-26ca5de2bd5e name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.644318654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6de412e-dda1-423f-a7f0-26ca5de2bd5e name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.644860485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:359df88602eac689ea6f8a2d7a9975a0547e1bed39f60881b82a92736e6cb009,PodSandboxId:f6165aa9f5232cfa3983ad7ba5f2b01443acb0172047f93b410cfee89ed7e6c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727098325018657974,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99756f6019b4d7b958225ab6f9327a6b8c9203fbf3a2d830b5062cdd86647ba,PodSandboxId:ca0338566743328d45d72d7892aec5e33c54ac1153f64b0d8b1e540310a4ac9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727098291493819395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654185fbfcb03063b85dbe6773f55ad2831999a20283aff14d6229a7ce62f672,PodSandboxId:88dbc26750f98450a4227b6447782034ac35694dfbd57bd8a44b24bc4e3b3a16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727098291533214262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db6c0468a85317df9394dd318f8bc43400bc0ce88d7077411f6b535fd107e48,PodSandboxId:7a8400923cb977887a353ac861412e9718431dd173807754346e28fbbf73f550,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727098291368736908,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b-b32c85869126,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6979fac3f5b0c2eaef8551a532606c73e38cd83f774096644077a457b0c1ff,PodSandboxId:a4037f5308176d337b2deca77b44b8af2c73565b79e10ee4714a1a5d145710ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098291320194016,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ac0945c09a0432f1c1e1b73c691250779a94d3a34f10a893924ea884b2b3d0,PodSandboxId:d71d835adcd1d15844807d2a291e6fef6cb20e1b18b5172ee25e4cbbbac47f3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727098287473406807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ae473221e9e088de9eef2ef703d5d3f0766f4014f7fbc1d037c679d3e2baac,PodSandboxId:347f0941c6625f27283194de8d4fd32006b3d380888a85678b2f8a7063e5aa4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727098287475020753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e3cea42ba31eed103d200b57d82aaa7e8e100a2266c766104a3a52b620a95f,PodSandboxId:828939191cba88d1217296fdda58434ecfb1563b0377c4b0b25bbce93519acfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727098287437499929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e4fac777cc7ccf86d00ee7e26a7e351940e38e66d7ed676e56f1c859842e6bf,PodSandboxId:055632a81dcd102f90add8d8a980bfe8dc44e947f121f368e0a4854dc05c8b58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727098287404054814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c269baf9d7255a9bfe6cfce34bb0d26bc2c217b16c3c6150bd0199bb43fe0fd,PodSandboxId:68ce621d149dbab89bbf1d40250aeafe88c567a5cda763b753d58dbe7b5983bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727097962330352006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983,PodSandboxId:7e8debb3a7a32a909f49fdbb6b4e160cf52736f78ed06f1b8109978ac36d9d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727097906078294996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2,PodSandboxId:93eb120762a7ecc71cd179f3bdad4a6269c404b8b84f7807e052cc87f2cbe855,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727097906023278582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f,PodSandboxId:6d7dd123595bb380fa71402185f5498f549c3dd5b06c7706f0731b2908d24371,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727097894067120166,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194,PodSandboxId:799bf1c098365a9033865dcd72e0da7b176b1f4120dc73536f70e4e1709169ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727097893819208405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b
-b32c85869126,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c,PodSandboxId:ce7bd0baa5c47857eca3027f914a72474f6728c7a0efb049b7d513de9c55b8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727097883225903642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16,PodSandboxId:8df9ec5d7f3b25e1c83cdfa42036161a0e93ee475c14e4a554ada703c1ce083a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727097883190370572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4,PodSandboxId:25e766a6006484584935633eff06c186880f1751c10365776bd6e07ac9e5a007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727097883221568719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c,PodSandboxId:6f6f297d2bf0b1ed1d4a99885a76e9163ea663e071b08c4b311ac857f507925f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727097883164371598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6de412e-dda1-423f-a7f0-26ca5de2bd5e name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.690703164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ac5e124-14c5-4e0a-908d-c58fa47f6423 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.690822834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ac5e124-14c5-4e0a-908d-c58fa47f6423 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.692482030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0470aed-3328-46b4-90d2-74a9bf4acc7a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.693223877Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098537693196968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0470aed-3328-46b4-90d2-74a9bf4acc7a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.694123170Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d248656-c27a-4615-ac42-089472f9c7c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.694197432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d248656-c27a-4615-ac42-089472f9c7c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.694653511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:359df88602eac689ea6f8a2d7a9975a0547e1bed39f60881b82a92736e6cb009,PodSandboxId:f6165aa9f5232cfa3983ad7ba5f2b01443acb0172047f93b410cfee89ed7e6c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727098325018657974,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99756f6019b4d7b958225ab6f9327a6b8c9203fbf3a2d830b5062cdd86647ba,PodSandboxId:ca0338566743328d45d72d7892aec5e33c54ac1153f64b0d8b1e540310a4ac9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727098291493819395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654185fbfcb03063b85dbe6773f55ad2831999a20283aff14d6229a7ce62f672,PodSandboxId:88dbc26750f98450a4227b6447782034ac35694dfbd57bd8a44b24bc4e3b3a16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727098291533214262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db6c0468a85317df9394dd318f8bc43400bc0ce88d7077411f6b535fd107e48,PodSandboxId:7a8400923cb977887a353ac861412e9718431dd173807754346e28fbbf73f550,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727098291368736908,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b-b32c85869126,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6979fac3f5b0c2eaef8551a532606c73e38cd83f774096644077a457b0c1ff,PodSandboxId:a4037f5308176d337b2deca77b44b8af2c73565b79e10ee4714a1a5d145710ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098291320194016,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ac0945c09a0432f1c1e1b73c691250779a94d3a34f10a893924ea884b2b3d0,PodSandboxId:d71d835adcd1d15844807d2a291e6fef6cb20e1b18b5172ee25e4cbbbac47f3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727098287473406807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ae473221e9e088de9eef2ef703d5d3f0766f4014f7fbc1d037c679d3e2baac,PodSandboxId:347f0941c6625f27283194de8d4fd32006b3d380888a85678b2f8a7063e5aa4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727098287475020753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e3cea42ba31eed103d200b57d82aaa7e8e100a2266c766104a3a52b620a95f,PodSandboxId:828939191cba88d1217296fdda58434ecfb1563b0377c4b0b25bbce93519acfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727098287437499929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e4fac777cc7ccf86d00ee7e26a7e351940e38e66d7ed676e56f1c859842e6bf,PodSandboxId:055632a81dcd102f90add8d8a980bfe8dc44e947f121f368e0a4854dc05c8b58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727098287404054814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c269baf9d7255a9bfe6cfce34bb0d26bc2c217b16c3c6150bd0199bb43fe0fd,PodSandboxId:68ce621d149dbab89bbf1d40250aeafe88c567a5cda763b753d58dbe7b5983bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727097962330352006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983,PodSandboxId:7e8debb3a7a32a909f49fdbb6b4e160cf52736f78ed06f1b8109978ac36d9d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727097906078294996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2,PodSandboxId:93eb120762a7ecc71cd179f3bdad4a6269c404b8b84f7807e052cc87f2cbe855,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727097906023278582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f,PodSandboxId:6d7dd123595bb380fa71402185f5498f549c3dd5b06c7706f0731b2908d24371,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727097894067120166,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194,PodSandboxId:799bf1c098365a9033865dcd72e0da7b176b1f4120dc73536f70e4e1709169ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727097893819208405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b
-b32c85869126,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c,PodSandboxId:ce7bd0baa5c47857eca3027f914a72474f6728c7a0efb049b7d513de9c55b8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727097883225903642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16,PodSandboxId:8df9ec5d7f3b25e1c83cdfa42036161a0e93ee475c14e4a554ada703c1ce083a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727097883190370572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4,PodSandboxId:25e766a6006484584935633eff06c186880f1751c10365776bd6e07ac9e5a007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727097883221568719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c,PodSandboxId:6f6f297d2bf0b1ed1d4a99885a76e9163ea663e071b08c4b311ac857f507925f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727097883164371598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d248656-c27a-4615-ac42-089472f9c7c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.742679005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f18e417-8f78-4dce-b155-86f70161e189 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.742905393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f18e417-8f78-4dce-b155-86f70161e189 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.744213503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8b9a8da-48af-4f30-8688-ea8cf5b54795 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.744809509Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098537744783581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8b9a8da-48af-4f30-8688-ea8cf5b54795 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.745310162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bdfb39d-e45e-4c67-ae7f-39d3a6cf06df name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.745367659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bdfb39d-e45e-4c67-ae7f-39d3a6cf06df name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:35:37 multinode-851928 crio[2731]: time="2024-09-23 13:35:37.745766623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:359df88602eac689ea6f8a2d7a9975a0547e1bed39f60881b82a92736e6cb009,PodSandboxId:f6165aa9f5232cfa3983ad7ba5f2b01443acb0172047f93b410cfee89ed7e6c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727098325018657974,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99756f6019b4d7b958225ab6f9327a6b8c9203fbf3a2d830b5062cdd86647ba,PodSandboxId:ca0338566743328d45d72d7892aec5e33c54ac1153f64b0d8b1e540310a4ac9d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727098291493819395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654185fbfcb03063b85dbe6773f55ad2831999a20283aff14d6229a7ce62f672,PodSandboxId:88dbc26750f98450a4227b6447782034ac35694dfbd57bd8a44b24bc4e3b3a16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727098291533214262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db6c0468a85317df9394dd318f8bc43400bc0ce88d7077411f6b535fd107e48,PodSandboxId:7a8400923cb977887a353ac861412e9718431dd173807754346e28fbbf73f550,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727098291368736908,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b-b32c85869126,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6979fac3f5b0c2eaef8551a532606c73e38cd83f774096644077a457b0c1ff,PodSandboxId:a4037f5308176d337b2deca77b44b8af2c73565b79e10ee4714a1a5d145710ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098291320194016,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ac0945c09a0432f1c1e1b73c691250779a94d3a34f10a893924ea884b2b3d0,PodSandboxId:d71d835adcd1d15844807d2a291e6fef6cb20e1b18b5172ee25e4cbbbac47f3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727098287473406807,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ae473221e9e088de9eef2ef703d5d3f0766f4014f7fbc1d037c679d3e2baac,PodSandboxId:347f0941c6625f27283194de8d4fd32006b3d380888a85678b2f8a7063e5aa4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727098287475020753,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e3cea42ba31eed103d200b57d82aaa7e8e100a2266c766104a3a52b620a95f,PodSandboxId:828939191cba88d1217296fdda58434ecfb1563b0377c4b0b25bbce93519acfb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727098287437499929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e4fac777cc7ccf86d00ee7e26a7e351940e38e66d7ed676e56f1c859842e6bf,PodSandboxId:055632a81dcd102f90add8d8a980bfe8dc44e947f121f368e0a4854dc05c8b58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727098287404054814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c269baf9d7255a9bfe6cfce34bb0d26bc2c217b16c3c6150bd0199bb43fe0fd,PodSandboxId:68ce621d149dbab89bbf1d40250aeafe88c567a5cda763b753d58dbe7b5983bb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727097962330352006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gl4bk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cd876f4-a4fd-466e-a84c-151e00179085,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983,PodSandboxId:7e8debb3a7a32a909f49fdbb6b4e160cf52736f78ed06f1b8109978ac36d9d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727097906078294996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vwqlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836d6606-1f23-4ad0-920e-4a58493501d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b801f8d1903dcc317703d0c9c8339254a8589f5c6b3f839975618b27f22cae2,PodSandboxId:93eb120762a7ecc71cd179f3bdad4a6269c404b8b84f7807e052cc87f2cbe855,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727097906023278582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ebe9874e-9033-47fb-a3a4-bce0f18c688e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f,PodSandboxId:6d7dd123595bb380fa71402185f5498f549c3dd5b06c7706f0731b2908d24371,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727097894067120166,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c8x2d,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 11cb938f-96d8-4fc5-bec8-345a527fa45c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194,PodSandboxId:799bf1c098365a9033865dcd72e0da7b176b1f4120dc73536f70e4e1709169ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727097893819208405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s52gf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc994d5-3cb5-4463-966b
-b32c85869126,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c,PodSandboxId:ce7bd0baa5c47857eca3027f914a72474f6728c7a0efb049b7d513de9c55b8f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727097883225903642,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02835a6cac33d484359446ef024faa27,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16,PodSandboxId:8df9ec5d7f3b25e1c83cdfa42036161a0e93ee475c14e4a554ada703c1ce083a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727097883190370572,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 878441eec194631b842ef5e820e0ff09,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4,PodSandboxId:25e766a6006484584935633eff06c186880f1751c10365776bd6e07ac9e5a007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727097883221568719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 356d5ed4adf1a57c3ad8edf8e104c7f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c,PodSandboxId:6f6f297d2bf0b1ed1d4a99885a76e9163ea663e071b08c4b311ac857f507925f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727097883164371598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851928,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f132eb7802a4272b67ff520aaf3e0c91,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bdfb39d-e45e-4c67-ae7f-39d3a6cf06df name=/runtime.v1.RuntimeService/ListContainers
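The crio entries above are RuntimeService/ListContainers calls arriving over the CRI socket. The same listing can be reproduced by hand from inside the node; a minimal sketch, assuming the profile name multinode-851928 and the crio socket path advertised in the node annotations further down (unix:///var/run/crio/crio.sock):

	minikube ssh -p multinode-851928
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

crictl ps -a issues the same ListContainers RPC and prints essentially the "container status" table that follows.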
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	359df88602eac       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   f6165aa9f5232       busybox-7dff88458-gl4bk
	654185fbfcb03       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   88dbc26750f98       coredns-7c65d6cfc9-vwqlq
	d99756f6019b4       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   ca03385667433       kindnet-c8x2d
	1db6c0468a853       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   7a8400923cb97       kube-proxy-s52gf
	de6979fac3f5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   a4037f5308176       storage-provisioner
	24ae473221e9e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   347f0941c6625       etcd-multinode-851928
	f9ac0945c09a0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   d71d835adcd1d       kube-scheduler-multinode-851928
	53e3cea42ba31       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   828939191cba8       kube-controller-manager-multinode-851928
	9e4fac777cc7c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   055632a81dcd1       kube-apiserver-multinode-851928
	3c269baf9d725       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   68ce621d149db       busybox-7dff88458-gl4bk
	f3ee062c82e96       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   7e8debb3a7a32       coredns-7c65d6cfc9-vwqlq
	1b801f8d1903d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   93eb120762a7e       storage-provisioner
	56cde957d502e       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   6d7dd123595bb       kindnet-c8x2d
	618cba5848a3c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   799bf1c098365       kube-proxy-s52gf
	0f70273abbde2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   ce7bd0baa5c47       kube-scheduler-multinode-851928
	eec587e30a7bb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   25e766a600648       kube-controller-manager-multinode-851928
	306c5ac129489       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   8df9ec5d7f3b2       etcd-multinode-851928
	692d9ab32ac92       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   6f6f297d2bf0b       kube-apiserver-multinode-851928
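The attempt-0 containers in the table are the pre-restart instances; their logs stay readable by ID until the old sandboxes are garbage-collected. A hedged example for the exited coredns container listed above (crictl normally accepts an unambiguous ID prefix, here the truncated ID from the table):

	sudo crictl logs f3ee062c82e96

This is the same log stream the coredns [f3ee062c82e96...] section below was collected from.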
	
	
	==> coredns [654185fbfcb03063b85dbe6773f55ad2831999a20283aff14d6229a7ce62f672] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44144 - 31565 "HINFO IN 2076000021483381523.7966737289315741758. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015286557s
	
	
	==> coredns [f3ee062c82e9627964304fba440efa3b6e5b3d497a3f92f9e9222fc249896983] <==
	[INFO] 10.244.0.3:55317 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002277841s
	[INFO] 10.244.0.3:53261 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147555s
	[INFO] 10.244.0.3:47932 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082196s
	[INFO] 10.244.0.3:48404 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001589907s
	[INFO] 10.244.0.3:53795 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008073s
	[INFO] 10.244.0.3:47751 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082714s
	[INFO] 10.244.0.3:46818 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072306s
	[INFO] 10.244.1.2:46213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167812s
	[INFO] 10.244.1.2:60560 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104495s
	[INFO] 10.244.1.2:52294 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085458s
	[INFO] 10.244.1.2:37666 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119278s
	[INFO] 10.244.0.3:39344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102984s
	[INFO] 10.244.0.3:56258 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147189s
	[INFO] 10.244.0.3:44491 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088834s
	[INFO] 10.244.0.3:60210 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072307s
	[INFO] 10.244.1.2:38393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134145s
	[INFO] 10.244.1.2:39793 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000226028s
	[INFO] 10.244.1.2:46395 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000140061s
	[INFO] 10.244.1.2:56458 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000125366s
	[INFO] 10.244.0.3:46222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128771s
	[INFO] 10.244.0.3:48880 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064585s
	[INFO] 10.244.0.3:51024 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074571s
	[INFO] 10.244.0.3:35502 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000055063s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
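The exited coredns instance above answered lookups for kubernetes.default and host.minikube.internal normally before receiving SIGTERM during the node restart. The same in-cluster queries can be replayed with a throwaway pod; a sketch, assuming the kubeconfig context created by the profile (multinode-851928) and a hypothetical pod name dnsprobe:

	kubectl --context multinode-851928 run dnsprobe --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default.svc.cluster.local
	kubectl --context multinode-851928 run dnsprobe --image=busybox:1.28 --restart=Never --rm -it -- nslookup host.minikube.internal

busybox:1.28 is used here because its nslookup handles cluster DNS search domains more predictably than newer busybox builds.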
	
	
	==> describe nodes <==
	Name:               multinode-851928
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-851928
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-851928
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_24_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:24:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-851928
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:35:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:31:30 +0000   Mon, 23 Sep 2024 13:24:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:31:30 +0000   Mon, 23 Sep 2024 13:24:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:31:30 +0000   Mon, 23 Sep 2024 13:24:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:31:30 +0000   Mon, 23 Sep 2024 13:25:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    multinode-851928
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 12f97256f1164843ab6c37f2bd6746c2
	  System UUID:                12f97256-f116-4843-ab6c-37f2bd6746c2
	  Boot ID:                    f4ef7a41-b130-453c-b780-b9b1171eb465
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gl4bk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 coredns-7c65d6cfc9-vwqlq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-851928                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-c8x2d                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-851928             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-851928    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-s52gf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-851928             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-851928 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-851928 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-851928 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-851928 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-851928 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-851928 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-851928 event: Registered Node multinode-851928 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-851928 status is now: NodeReady
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node multinode-851928 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node multinode-851928 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node multinode-851928 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                   node-controller  Node multinode-851928 event: Registered Node multinode-851928 in Controller
	
	
	Name:               multinode-851928-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-851928-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-851928
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_32_09_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:32:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-851928-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:33:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 13:32:39 +0000   Mon, 23 Sep 2024 13:33:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 13:32:39 +0000   Mon, 23 Sep 2024 13:33:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 13:32:39 +0000   Mon, 23 Sep 2024 13:33:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 13:32:39 +0000   Mon, 23 Sep 2024 13:33:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.25
	  Hostname:    multinode-851928-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2078f891f8d843f0b52842500fda8541
	  System UUID:                2078f891-f8d8-43f0-b528-42500fda8541
	  Boot ID:                    cd73d277-2245-4a26-8011-80494fd2b5ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zrc2v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kindnet-wxjn6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-tbjrf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m24s                  kube-proxy       
	  Normal  Starting                 9m56s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-851928-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-851928-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-851928-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m42s                  kubelet          Node multinode-851928-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m30s (x2 over 3m30s)  kubelet          Node multinode-851928-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m30s (x2 over 3m30s)  kubelet          Node multinode-851928-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m30s (x2 over 3m30s)  kubelet          Node multinode-851928-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m10s                  kubelet          Node multinode-851928-m02 status is now: NodeReady
	  Normal  NodeNotReady             105s                   node-controller  Node multinode-851928-m02 status is now: NodeNotReady
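multinode-851928-m02 is carrying node.kubernetes.io/unreachable taints and its conditions have gone Unknown because its kubelet stopped posting status around 13:33. A quick way to check that condition directly, assuming the same kubeconfig context as above:

	kubectl --context multinode-851928 get nodes -o wide
	kubectl --context multinode-851928 get node multinode-851928-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'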
	
	
	==> dmesg <==
	[  +0.064556] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055603] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.197628] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.126284] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.295588] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.943745] systemd-fstab-generator[741]: Ignoring "noauto" option for root device
	[  +4.026209] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.058691] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.005525] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.085315] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.348922] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.306699] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[Sep23 13:25] kauditd_printk_skb: 60 callbacks suppressed
	[ +53.153149] kauditd_printk_skb: 12 callbacks suppressed
	[Sep23 13:31] systemd-fstab-generator[2655]: Ignoring "noauto" option for root device
	[  +0.145002] systemd-fstab-generator[2667]: Ignoring "noauto" option for root device
	[  +0.172038] systemd-fstab-generator[2681]: Ignoring "noauto" option for root device
	[  +0.135290] systemd-fstab-generator[2693]: Ignoring "noauto" option for root device
	[  +0.290138] systemd-fstab-generator[2722]: Ignoring "noauto" option for root device
	[  +0.776983] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[  +2.167724] systemd-fstab-generator[2931]: Ignoring "noauto" option for root device
	[  +4.696980] kauditd_printk_skb: 184 callbacks suppressed
	[  +5.875291] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.270728] systemd-fstab-generator[3785]: Ignoring "noauto" option for root device
	[Sep23 13:32] kauditd_printk_skb: 12 callbacks suppressed
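The dmesg excerpt contains only systemd-fstab-generator and kauditd messages from the original boot and the 13:31 restart. If a fuller kernel log is needed it can be re-read on the node, assuming the same profile name:

	minikube ssh -p multinode-851928
	sudo dmesg | tail -n 50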
	
	
	==> etcd [24ae473221e9e088de9eef2ef703d5d3f0766f4014f7fbc1d037c679d3e2baac] <==
	{"level":"info","ts":"2024-09-23T13:31:27.997644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 switched to configuration voters=(16379515494576287720)"}
	{"level":"info","ts":"2024-09-23T13:31:27.997745Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","added-peer-id":"e34fba8f5739efe8","added-peer-peer-urls":["https://192.168.39.168:2380"]}
	{"level":"info","ts":"2024-09-23T13:31:27.997888Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:31:27.997931Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:31:28.004251Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T13:31:28.004650Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e34fba8f5739efe8","initial-advertise-peer-urls":["https://192.168.39.168:2380"],"listen-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.168:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T13:31:28.004696Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T13:31:28.004807Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-23T13:31:28.004828Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-23T13:31:29.034194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-23T13:31:29.034324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T13:31:29.034393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgPreVoteResp from e34fba8f5739efe8 at term 2"}
	{"level":"info","ts":"2024-09-23T13:31:29.034430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T13:31:29.034455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgVoteResp from e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-09-23T13:31:29.034481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T13:31:29.034510Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e34fba8f5739efe8 elected leader e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-09-23T13:31:29.041301Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e34fba8f5739efe8","local-member-attributes":"{Name:multinode-851928 ClientURLs:[https://192.168.39.168:2379]}","request-path":"/0/members/e34fba8f5739efe8/attributes","cluster-id":"f729467791c9db0d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T13:31:29.041416Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:31:29.041441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:31:29.042167Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T13:31:29.042243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T13:31:29.042941Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:31:29.043193Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:31:29.043764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.168:2379"}
	{"level":"info","ts":"2024-09-23T13:31:29.043932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [306c5ac12948941777bcc8958b4a6ed737c7f0b3c6501816a604e4fb0da5fe16] <==
	{"level":"info","ts":"2024-09-23T13:24:44.573381Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:24:44.575474Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.168:2379"}
	{"level":"info","ts":"2024-09-23T13:24:44.573417Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:24:44.583668Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:24:44.583718Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:25:35.276328Z","caller":"traceutil/trace.go:171","msg":"trace[486577434] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"233.284853ms","start":"2024-09-23T13:25:35.043027Z","end":"2024-09-23T13:25:35.276312Z","steps":["trace[486577434] 'process raft request'  (duration: 213.187421ms)","trace[486577434] 'compare'  (duration: 19.944682ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:25:35.277038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.241161ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-851928-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:25:35.277095Z","caller":"traceutil/trace.go:171","msg":"trace[1220359028] range","detail":"{range_begin:/registry/minions/multinode-851928-m02; range_end:; response_count:0; response_revision:472; }","duration":"161.370509ms","start":"2024-09-23T13:25:35.115715Z","end":"2024-09-23T13:25:35.277086Z","steps":["trace[1220359028] 'agreement among raft nodes before linearized reading'  (duration: 161.148351ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:35.277290Z","caller":"traceutil/trace.go:171","msg":"trace[911968721] linearizableReadLoop","detail":"{readStateIndex:492; appliedIndex:491; }","duration":"160.536747ms","start":"2024-09-23T13:25:35.115721Z","end":"2024-09-23T13:25:35.276258Z","steps":["trace[911968721] 'read index received'  (duration: 140.456701ms)","trace[911968721] 'applied index is now lower than readState.Index'  (duration: 20.079109ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:26:34.717908Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.480109ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17287227831743210186 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-851928-m03.17f7e279b440da84\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-851928-m03.17f7e279b440da84\" value_size:642 lease:8063855794888434060 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-23T13:26:34.718049Z","caller":"traceutil/trace.go:171","msg":"trace[1960587685] linearizableReadLoop","detail":"{readStateIndex:644; appliedIndex:643; }","duration":"196.913721ms","start":"2024-09-23T13:26:34.521105Z","end":"2024-09-23T13:26:34.718019Z","steps":["trace[1960587685] 'read index received'  (duration: 84.531511ms)","trace[1960587685] 'applied index is now lower than readState.Index'  (duration: 112.381352ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T13:26:34.718116Z","caller":"traceutil/trace.go:171","msg":"trace[1188400364] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"237.415432ms","start":"2024-09-23T13:26:34.480685Z","end":"2024-09-23T13:26:34.718101Z","steps":["trace[1188400364] 'process raft request'  (duration: 124.969317ms)","trace[1188400364] 'compare'  (duration: 111.157939ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:26:34.718470Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.372899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-851928-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:26:34.718518Z","caller":"traceutil/trace.go:171","msg":"trace[1742467683] range","detail":"{range_begin:/registry/minions/multinode-851928-m03; range_end:; response_count:0; response_revision:610; }","duration":"197.422802ms","start":"2024-09-23T13:26:34.521083Z","end":"2024-09-23T13:26:34.718505Z","steps":["trace[1742467683] 'agreement among raft nodes before linearized reading'  (duration: 197.35752ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:26:36.497309Z","caller":"traceutil/trace.go:171","msg":"trace[1319836794] transaction","detail":"{read_only:false; response_revision:636; number_of_response:1; }","duration":"163.638701ms","start":"2024-09-23T13:26:36.333657Z","end":"2024-09-23T13:26:36.497296Z","steps":["trace[1319836794] 'process raft request'  (duration: 163.529076ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:29:51.493822Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-23T13:29:51.493932Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-851928","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	{"level":"warn","ts":"2024-09-23T13:29:51.494035Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T13:29:51.494140Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T13:29:51.554534Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T13:29:51.554658Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T13:29:51.554825Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e34fba8f5739efe8","current-leader-member-id":"e34fba8f5739efe8"}
	{"level":"info","ts":"2024-09-23T13:29:51.557947Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-23T13:29:51.558122Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-23T13:29:51.558170Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-851928","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	
	
	==> kernel <==
	 13:35:38 up 11 min,  0 users,  load average: 0.15, 0.34, 0.20
	Linux multinode-851928 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [56cde957d502ee7288dee888b768f3cff4ccf17d74731851e7bbb81a0e5a5d7f] <==
	I0923 13:29:05.154837       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	I0923 13:29:15.155424       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:29:15.155469       1 main.go:299] handling current node
	I0923 13:29:15.155517       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:29:15.155523       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:29:15.155720       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:29:15.155746       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	I0923 13:29:25.150504       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:29:25.150702       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	I0923 13:29:25.150921       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:29:25.150957       1 main.go:299] handling current node
	I0923 13:29:25.150985       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:29:25.151006       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:29:35.154266       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:29:35.154325       1 main.go:299] handling current node
	I0923 13:29:35.154349       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:29:35.154355       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:29:35.154492       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:29:35.154509       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	I0923 13:29:45.154730       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:29:45.154903       1 main.go:299] handling current node
	I0923 13:29:45.154970       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:29:45.154992       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:29:45.155156       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0923 13:29:45.155180       1 main.go:322] Node multinode-851928-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d99756f6019b4d7b958225ab6f9327a6b8c9203fbf3a2d830b5062cdd86647ba] <==
	I0923 13:34:32.464790       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:34:42.472995       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:34:42.473061       1 main.go:299] handling current node
	I0923 13:34:42.473079       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:34:42.473086       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:34:52.473295       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:34:52.473347       1 main.go:299] handling current node
	I0923 13:34:52.473364       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:34:52.473370       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:35:02.467320       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:35:02.467375       1 main.go:299] handling current node
	I0923 13:35:02.467393       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:35:02.467400       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:35:12.472912       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:35:12.473007       1 main.go:299] handling current node
	I0923 13:35:12.473058       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:35:12.473064       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:35:22.474158       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:35:22.474247       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	I0923 13:35:22.474394       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:35:22.474401       1 main.go:299] handling current node
	I0923 13:35:32.464389       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0923 13:35:32.464464       1 main.go:299] handling current node
	I0923 13:35:32.464486       1 main.go:295] Handling node with IPs: map[192.168.39.25:{}]
	I0923 13:35:32.464493       1 main.go:322] Node multinode-851928-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [692d9ab32ac920c03e99a206737a75e8f420c2aa3047b251a9e76a8feefa6d7c] <==
	I0923 13:29:51.503913       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0923 13:29:51.504002       1 controller.go:157] Shutting down quota evaluator
	I0923 13:29:51.504014       1 controller.go:176] quota evaluator worker shutdown
	I0923 13:29:51.504521       1 controller.go:176] quota evaluator worker shutdown
	I0923 13:29:51.504531       1 controller.go:176] quota evaluator worker shutdown
	I0923 13:29:51.504535       1 controller.go:176] quota evaluator worker shutdown
	I0923 13:29:51.504540       1 controller.go:176] quota evaluator worker shutdown
	I0923 13:29:51.505805       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 13:29:51.507453       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0923 13:29:51.507566       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0923 13:29:51.507660       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 13:29:51.507859       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0923 13:29:51.507898       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0923 13:29:51.509543       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0923 13:29:51.509727       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0923 13:29:51.509799       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0923 13:29:51.511672       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0923 13:29:51.520694       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520764       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520800       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520834       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520865       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520912       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520943       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:29:51.520991       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9e4fac777cc7ccf86d00ee7e26a7e351940e38e66d7ed676e56f1c859842e6bf] <==
	I0923 13:31:30.433980       1 aggregator.go:171] initial CRD sync complete...
	I0923 13:31:30.434000       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 13:31:30.434007       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 13:31:30.435863       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 13:31:30.470842       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 13:31:30.476161       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:31:30.476207       1 policy_source.go:224] refreshing policies
	I0923 13:31:30.480670       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 13:31:30.480757       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 13:31:30.480893       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 13:31:30.482649       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 13:31:30.482720       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 13:31:30.482859       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 13:31:30.486685       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0923 13:31:30.494374       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0923 13:31:30.536289       1 cache.go:39] Caches are synced for autoregister controller
	I0923 13:31:30.536942       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 13:31:31.298719       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 13:31:32.759918       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 13:31:32.892756       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 13:31:32.913910       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 13:31:33.036399       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 13:31:33.051938       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 13:31:33.933328       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 13:31:34.136237       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [53e3cea42ba31eed103d200b57d82aaa7e8e100a2266c766104a3a52b620a95f] <==
	I0923 13:32:48.948377       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851928-m03" podCIDRs=["10.244.2.0/24"]
	I0923 13:32:48.948420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:48.948531       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:48.965115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:48.979644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:49.332257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:53.980077       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:32:59.082126       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:33:07.775515       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:33:07.775675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:33:07.792189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:33:08.903197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:33:12.472502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:33:12.488950       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:33:13.042022       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:33:13.042190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:33:53.923968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	I0923 13:33:53.952432       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	I0923 13:33:53.982015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.533916ms"
	I0923 13:33:53.982359       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="98.788µs"
	I0923 13:33:59.003239       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	I0923 13:34:13.857945       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-vx85t"
	I0923 13:34:13.910718       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-vx85t"
	I0923 13:34:13.910929       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-w8srs"
	I0923 13:34:13.954837       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-w8srs"
	
	
	==> kube-controller-manager [eec587e30a7bb93e57e0360e1ed4662c79a8eced62814cb35146c0dba40123e4] <==
	I0923 13:27:24.413888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:24.660152       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:27:24.660955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:25.901848       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851928-m03\" does not exist"
	I0923 13:27:25.902178       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:27:25.925450       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851928-m03" podCIDRs=["10.244.3.0/24"]
	I0923 13:27:25.925721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:25.926859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:26.215847       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:26.560219       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:27.642182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:36.038126       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:45.983357       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:45.983675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:27:45.995332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:27:47.590984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:28:27.609939       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:28:27.610801       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851928-m02"
	I0923 13:28:27.634908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:28:32.655476       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	I0923 13:28:32.670522       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m03"
	I0923 13:28:32.676955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	I0923 13:28:32.710374       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.41328ms"
	I0923 13:28:32.710624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="103.574µs"
	I0923 13:28:42.755074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-851928-m02"
	
	
	==> kube-proxy [1db6c0468a85317df9394dd318f8bc43400bc0ce88d7077411f6b535fd107e48] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:31:31.818141       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 13:31:31.828663       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0923 13:31:31.828878       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:31:31.886990       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:31:31.887130       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:31:31.887209       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:31:31.891181       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:31:31.891762       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:31:31.891961       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:31:31.893536       1 config.go:199] "Starting service config controller"
	I0923 13:31:31.893636       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:31:31.893668       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:31:31.893684       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:31:31.894217       1 config.go:328] "Starting node config controller"
	I0923 13:31:31.894239       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:31:31.994208       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:31:31.994249       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:31:31.994500       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [618cba5848a3cc3bd892bbb9e1cade2bdfa5035a1d7614c5a697351c7cf6b194] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:24:54.178352       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 13:24:54.212118       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0923 13:24:54.212218       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:24:54.266356       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:24:54.266397       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:24:54.266420       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:24:54.269106       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:24:54.269408       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:24:54.269430       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:24:54.270804       1 config.go:199] "Starting service config controller"
	I0923 13:24:54.270839       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:24:54.270869       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:24:54.270885       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:24:54.271353       1 config.go:328] "Starting node config controller"
	I0923 13:24:54.271380       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:24:54.371619       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:24:54.371706       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:24:54.371532       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f70273abbde20fa97dc324c2b48d24df3559f02d2199042fbb7b615ac8c379c] <==
	E0923 13:24:45.873759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:45.873851       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 13:24:45.873874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:45.873966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 13:24:45.874029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:45.874177       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:24:45.874240       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 13:24:46.703431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:24:46.703487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:46.717006       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 13:24:46.717061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:46.726183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 13:24:46.726281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:46.867734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 13:24:46.867783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:47.007746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:24:47.007800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:47.044669       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:24:47.044714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:47.150710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 13:24:47.150857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:47.466480       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:24:47.467022       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 13:24:49.561328       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 13:29:51.506138       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f9ac0945c09a0432f1c1e1b73c691250779a94d3a34f10a893924ea884b2b3d0] <==
	W0923 13:31:30.425306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:31:30.428217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.425359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 13:31:30.428271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.425404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 13:31:30.428324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.425488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:31:30.428376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.425539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 13:31:30.428428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:31:30.428485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427784       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 13:31:30.428539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 13:31:30.428649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:31:30.428710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 13:31:30.428760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.427990       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 13:31:30.428812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:31:30.428841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 13:31:30.428875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 13:31:31.599289       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 13:34:26 multinode-851928 kubelet[2938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:34:26 multinode-851928 kubelet[2938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:34:26 multinode-851928 kubelet[2938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:34:26 multinode-851928 kubelet[2938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:34:26 multinode-851928 kubelet[2938]: E0923 13:34:26.907436    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098466907098087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:34:26 multinode-851928 kubelet[2938]: E0923 13:34:26.907480    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098466907098087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:34:36 multinode-851928 kubelet[2938]: E0923 13:34:36.909317    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098476908898516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:34:36 multinode-851928 kubelet[2938]: E0923 13:34:36.909803    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098476908898516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:34:46 multinode-851928 kubelet[2938]: E0923 13:34:46.911792    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098486911315844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:34:46 multinode-851928 kubelet[2938]: E0923 13:34:46.911841    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098486911315844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:34:56 multinode-851928 kubelet[2938]: E0923 13:34:56.914243    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098496913875389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:34:56 multinode-851928 kubelet[2938]: E0923 13:34:56.914544    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098496913875389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:35:06 multinode-851928 kubelet[2938]: E0923 13:35:06.916741    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098506916258860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:35:06 multinode-851928 kubelet[2938]: E0923 13:35:06.916767    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098506916258860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:35:16 multinode-851928 kubelet[2938]: E0923 13:35:16.920013    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098516919454205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:35:16 multinode-851928 kubelet[2938]: E0923 13:35:16.920311    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098516919454205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:35:26 multinode-851928 kubelet[2938]: E0923 13:35:26.799464    2938 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:35:26 multinode-851928 kubelet[2938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:35:26 multinode-851928 kubelet[2938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:35:26 multinode-851928 kubelet[2938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:35:26 multinode-851928 kubelet[2938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:35:26 multinode-851928 kubelet[2938]: E0923 13:35:26.923738    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098526923208471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:35:26 multinode-851928 kubelet[2938]: E0923 13:35:26.923765    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098526923208471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:35:36 multinode-851928 kubelet[2938]: E0923 13:35:36.926090    2938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098536925677691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:35:36 multinode-851928 kubelet[2938]: E0923 13:35:36.926675    2938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098536925677691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 13:35:37.334495  702310 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19690-662205/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-851928 -n multinode-851928
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-851928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.70s)

                                                
                                    
TestPreload (175.51s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-590749 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0923 13:40:19.922717  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:40:29.177668  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:40:36.850539  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-590749 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.691477517s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-590749 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-590749 image pull gcr.io/k8s-minikube/busybox: (3.574149551s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-590749
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-590749: (7.300782957s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-590749 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-590749 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.751119184s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-590749 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-23 13:42:29.524214979 +0000 UTC m=+4482.959630450
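The image list above contains only the images that ship with the v1.24.4 preload; gcr.io/k8s-minikube/busybox, pulled before the stop, is no longer present after the restart, which is exactly what preload_test.go:76 asserts on. The failing sequence can be replayed by hand with the same commands the test ran (flags and profile name copied from this run; `minikube` stands in for the out/minikube-linux-amd64 binary under test):

    minikube start -p test-preload-590749 --memory=2200 --alsologtostderr --wait=true \
      --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p test-preload-590749 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-590749
    minikube start -p test-preload-590749 --memory=2200 --alsologtostderr -v=1 --wait=true \
      --driver=kvm2 --container-runtime=crio
    minikube -p test-preload-590749 image list    # busybox should still appear in this output
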
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-590749 -n test-preload-590749
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-590749 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-590749 logs -n 25: (1.116826551s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n multinode-851928 sudo cat                                       | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /home/docker/cp-test_multinode-851928-m03_multinode-851928.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-851928 cp multinode-851928-m03:/home/docker/cp-test.txt                       | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m02:/home/docker/cp-test_multinode-851928-m03_multinode-851928-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n                                                                 | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | multinode-851928-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-851928 ssh -n multinode-851928-m02 sudo cat                                   | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | /home/docker/cp-test_multinode-851928-m03_multinode-851928-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-851928 node stop m03                                                          | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	| node    | multinode-851928 node start                                                             | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-851928                                                                | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC |                     |
	| stop    | -p multinode-851928                                                                     | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:27 UTC |                     |
	| start   | -p multinode-851928                                                                     | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:29 UTC | 23 Sep 24 13:33 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-851928                                                                | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:33 UTC |                     |
	| node    | multinode-851928 node delete                                                            | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:33 UTC | 23 Sep 24 13:33 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-851928 stop                                                                   | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:33 UTC |                     |
	| start   | -p multinode-851928                                                                     | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:35 UTC | 23 Sep 24 13:38 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-851928                                                                | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:38 UTC |                     |
	| start   | -p multinode-851928-m02                                                                 | multinode-851928-m02 | jenkins | v1.34.0 | 23 Sep 24 13:38 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-851928-m03                                                                 | multinode-851928-m03 | jenkins | v1.34.0 | 23 Sep 24 13:38 UTC | 23 Sep 24 13:39 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-851928                                                                 | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:39 UTC |                     |
	| delete  | -p multinode-851928-m03                                                                 | multinode-851928-m03 | jenkins | v1.34.0 | 23 Sep 24 13:39 UTC | 23 Sep 24 13:39 UTC |
	| delete  | -p multinode-851928                                                                     | multinode-851928     | jenkins | v1.34.0 | 23 Sep 24 13:39 UTC | 23 Sep 24 13:39 UTC |
	| start   | -p test-preload-590749                                                                  | test-preload-590749  | jenkins | v1.34.0 | 23 Sep 24 13:39 UTC | 23 Sep 24 13:41 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-590749 image pull                                                          | test-preload-590749  | jenkins | v1.34.0 | 23 Sep 24 13:41 UTC | 23 Sep 24 13:41 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-590749                                                                  | test-preload-590749  | jenkins | v1.34.0 | 23 Sep 24 13:41 UTC | 23 Sep 24 13:41 UTC |
	| start   | -p test-preload-590749                                                                  | test-preload-590749  | jenkins | v1.34.0 | 23 Sep 24 13:41 UTC | 23 Sep 24 13:42 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-590749 image list                                                          | test-preload-590749  | jenkins | v1.34.0 | 23 Sep 24 13:42 UTC | 23 Sep 24 13:42 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:41:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:41:21.580486  704724 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:41:21.580621  704724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:41:21.580631  704724 out.go:358] Setting ErrFile to fd 2...
	I0923 13:41:21.580636  704724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:41:21.580822  704724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:41:21.581389  704724 out.go:352] Setting JSON to false
	I0923 13:41:21.582381  704724 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12225,"bootTime":1727086657,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 13:41:21.582489  704724 start.go:139] virtualization: kvm guest
	I0923 13:41:21.584744  704724 out.go:177] * [test-preload-590749] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 13:41:21.586275  704724 notify.go:220] Checking for updates...
	I0923 13:41:21.586334  704724 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:41:21.587738  704724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:41:21.589045  704724 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 13:41:21.590438  704724 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 13:41:21.591797  704724 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 13:41:21.593734  704724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:41:21.595261  704724 config.go:182] Loaded profile config "test-preload-590749": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0923 13:41:21.595668  704724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:41:21.595730  704724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:41:21.611809  704724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42313
	I0923 13:41:21.612312  704724 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:41:21.612905  704724 main.go:141] libmachine: Using API Version  1
	I0923 13:41:21.612933  704724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:41:21.613271  704724 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:41:21.613465  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	I0923 13:41:21.615671  704724 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 13:41:21.617120  704724 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:41:21.617470  704724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:41:21.617529  704724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:41:21.633388  704724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0923 13:41:21.634018  704724 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:41:21.634563  704724 main.go:141] libmachine: Using API Version  1
	I0923 13:41:21.634587  704724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:41:21.634914  704724 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:41:21.635134  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	I0923 13:41:21.671223  704724 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 13:41:21.672555  704724 start.go:297] selected driver: kvm2
	I0923 13:41:21.672578  704724 start.go:901] validating driver "kvm2" against &{Name:test-preload-590749 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-590749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:41:21.672709  704724 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:41:21.673431  704724 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:41:21.673536  704724 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 13:41:21.689462  704724 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 13:41:21.689872  704724 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:41:21.689908  704724 cni.go:84] Creating CNI manager for ""
	I0923 13:41:21.689955  704724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 13:41:21.690007  704724 start.go:340] cluster config:
	{Name:test-preload-590749 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-590749 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:41:21.690107  704724 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:41:21.692629  704724 out.go:177] * Starting "test-preload-590749" primary control-plane node in "test-preload-590749" cluster
	I0923 13:41:21.693911  704724 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0923 13:41:21.793420  704724 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0923 13:41:21.793454  704724 cache.go:56] Caching tarball of preloaded images
	I0923 13:41:21.793631  704724 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0923 13:41:21.795455  704724 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0923 13:41:21.796903  704724 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0923 13:41:21.897619  704724 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0923 13:41:33.108381  704724 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0923 13:41:33.108511  704724 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0923 13:41:33.976919  704724 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0923 13:41:33.977093  704724 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/config.json ...
	I0923 13:41:33.977368  704724 start.go:360] acquireMachinesLock for test-preload-590749: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:41:33.977455  704724 start.go:364] duration metric: took 56.296µs to acquireMachinesLock for "test-preload-590749"
	I0923 13:41:33.977477  704724 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:41:33.977486  704724 fix.go:54] fixHost starting: 
	I0923 13:41:33.977762  704724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:41:33.977810  704724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:41:33.993153  704724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0923 13:41:33.993735  704724 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:41:33.994302  704724 main.go:141] libmachine: Using API Version  1
	I0923 13:41:33.994320  704724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:41:33.994649  704724 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:41:33.994853  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	I0923 13:41:33.995006  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetState
	I0923 13:41:33.996729  704724 fix.go:112] recreateIfNeeded on test-preload-590749: state=Stopped err=<nil>
	I0923 13:41:33.996753  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	W0923 13:41:33.996910  704724 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:41:33.999572  704724 out.go:177] * Restarting existing kvm2 VM for "test-preload-590749" ...
	I0923 13:41:34.000907  704724 main.go:141] libmachine: (test-preload-590749) Calling .Start
	I0923 13:41:34.001104  704724 main.go:141] libmachine: (test-preload-590749) Ensuring networks are active...
	I0923 13:41:34.001982  704724 main.go:141] libmachine: (test-preload-590749) Ensuring network default is active
	I0923 13:41:34.002278  704724 main.go:141] libmachine: (test-preload-590749) Ensuring network mk-test-preload-590749 is active
	I0923 13:41:34.002677  704724 main.go:141] libmachine: (test-preload-590749) Getting domain xml...
	I0923 13:41:34.003455  704724 main.go:141] libmachine: (test-preload-590749) Creating domain...
	I0923 13:41:35.237223  704724 main.go:141] libmachine: (test-preload-590749) Waiting to get IP...
	I0923 13:41:35.238167  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:35.238650  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:35.238738  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:35.238621  704791 retry.go:31] will retry after 265.267085ms: waiting for machine to come up
	I0923 13:41:35.505160  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:35.505545  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:35.505601  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:35.505523  704791 retry.go:31] will retry after 248.382895ms: waiting for machine to come up
	I0923 13:41:35.756180  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:35.756732  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:35.756764  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:35.756682  704791 retry.go:31] will retry after 429.309607ms: waiting for machine to come up
	I0923 13:41:36.187523  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:36.187961  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:36.187988  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:36.187908  704791 retry.go:31] will retry after 373.633459ms: waiting for machine to come up
	I0923 13:41:36.563509  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:36.563925  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:36.563947  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:36.563900  704791 retry.go:31] will retry after 700.061973ms: waiting for machine to come up
	I0923 13:41:37.265916  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:37.266312  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:37.266335  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:37.266262  704791 retry.go:31] will retry after 677.318933ms: waiting for machine to come up
	I0923 13:41:37.944838  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:37.945206  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:37.945230  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:37.945140  704791 retry.go:31] will retry after 752.708142ms: waiting for machine to come up
	I0923 13:41:38.698943  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:38.699545  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:38.699583  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:38.699457  704791 retry.go:31] will retry after 897.482949ms: waiting for machine to come up
	I0923 13:41:39.598160  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:39.598731  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:39.598758  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:39.598681  704791 retry.go:31] will retry after 1.735705494s: waiting for machine to come up
	I0923 13:41:41.335981  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:41.336422  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:41.336475  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:41.336388  704791 retry.go:31] will retry after 1.752514783s: waiting for machine to come up
	I0923 13:41:43.091311  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:43.091793  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:43.091829  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:43.091725  704791 retry.go:31] will retry after 1.94278584s: waiting for machine to come up
	I0923 13:41:45.036630  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:45.037072  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:45.037104  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:45.037010  704791 retry.go:31] will retry after 3.291211137s: waiting for machine to come up
	I0923 13:41:48.332455  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:48.332918  704724 main.go:141] libmachine: (test-preload-590749) DBG | unable to find current IP address of domain test-preload-590749 in network mk-test-preload-590749
	I0923 13:41:48.332954  704724 main.go:141] libmachine: (test-preload-590749) DBG | I0923 13:41:48.332855  704791 retry.go:31] will retry after 3.241765416s: waiting for machine to come up
	I0923 13:41:51.578001  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.578624  704724 main.go:141] libmachine: (test-preload-590749) Found IP for machine: 192.168.39.117
	I0923 13:41:51.578647  704724 main.go:141] libmachine: (test-preload-590749) Reserving static IP address...
	I0923 13:41:51.578692  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has current primary IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.579137  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "test-preload-590749", mac: "52:54:00:fc:c7:88", ip: "192.168.39.117"} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:51.579168  704724 main.go:141] libmachine: (test-preload-590749) Reserved static IP address: 192.168.39.117
	I0923 13:41:51.579189  704724 main.go:141] libmachine: (test-preload-590749) DBG | skip adding static IP to network mk-test-preload-590749 - found existing host DHCP lease matching {name: "test-preload-590749", mac: "52:54:00:fc:c7:88", ip: "192.168.39.117"}
	I0923 13:41:51.579204  704724 main.go:141] libmachine: (test-preload-590749) Waiting for SSH to be available...
	I0923 13:41:51.579211  704724 main.go:141] libmachine: (test-preload-590749) DBG | Getting to WaitForSSH function...
	I0923 13:41:51.582529  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.583181  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:51.583238  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.583411  704724 main.go:141] libmachine: (test-preload-590749) DBG | Using SSH client type: external
	I0923 13:41:51.583513  704724 main.go:141] libmachine: (test-preload-590749) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/test-preload-590749/id_rsa (-rw-------)
	I0923 13:41:51.583602  704724 main.go:141] libmachine: (test-preload-590749) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/test-preload-590749/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 13:41:51.583629  704724 main.go:141] libmachine: (test-preload-590749) DBG | About to run SSH command:
	I0923 13:41:51.583649  704724 main.go:141] libmachine: (test-preload-590749) DBG | exit 0
	I0923 13:41:51.709994  704724 main.go:141] libmachine: (test-preload-590749) DBG | SSH cmd err, output: <nil>: 
	I0923 13:41:51.710384  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetConfigRaw
	I0923 13:41:51.711138  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetIP
	I0923 13:41:51.713823  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.714205  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:51.714229  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.714503  704724 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/config.json ...
	I0923 13:41:51.714703  704724 machine.go:93] provisionDockerMachine start ...
	I0923 13:41:51.714720  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	I0923 13:41:51.714911  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:41:51.717482  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.717981  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:51.718012  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.718205  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHPort
	I0923 13:41:51.718396  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:51.718590  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:51.718758  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHUsername
	I0923 13:41:51.718983  704724 main.go:141] libmachine: Using SSH client type: native
	I0923 13:41:51.719207  704724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0923 13:41:51.719220  704724 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:41:51.826518  704724 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 13:41:51.826547  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetMachineName
	I0923 13:41:51.826861  704724 buildroot.go:166] provisioning hostname "test-preload-590749"
	I0923 13:41:51.826897  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetMachineName
	I0923 13:41:51.827114  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:41:51.830283  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.830672  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:51.830708  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.830815  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHPort
	I0923 13:41:51.831012  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:51.831123  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:51.831262  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHUsername
	I0923 13:41:51.831599  704724 main.go:141] libmachine: Using SSH client type: native
	I0923 13:41:51.831805  704724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0923 13:41:51.831819  704724 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-590749 && echo "test-preload-590749" | sudo tee /etc/hostname
	I0923 13:41:51.952208  704724 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-590749
	
	I0923 13:41:51.952254  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:41:51.955504  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.955868  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:51.955902  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:51.956039  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHPort
	I0923 13:41:51.956287  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:51.956471  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:51.956668  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHUsername
	I0923 13:41:51.956842  704724 main.go:141] libmachine: Using SSH client type: native
	I0923 13:41:51.957037  704724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0923 13:41:51.957055  704724 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-590749' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-590749/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-590749' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:41:52.079783  704724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:41:52.079819  704724 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 13:41:52.079845  704724 buildroot.go:174] setting up certificates
	I0923 13:41:52.079855  704724 provision.go:84] configureAuth start
	I0923 13:41:52.079866  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetMachineName
	I0923 13:41:52.080155  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetIP
	I0923 13:41:52.083816  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.084258  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:52.084299  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.084579  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:41:52.087088  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.087489  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:52.087519  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.087689  704724 provision.go:143] copyHostCerts
	I0923 13:41:52.087749  704724 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 13:41:52.087770  704724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 13:41:52.087838  704724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 13:41:52.087970  704724 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 13:41:52.087980  704724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 13:41:52.088005  704724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 13:41:52.088071  704724 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 13:41:52.088079  704724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 13:41:52.088100  704724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 13:41:52.088150  704724 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.test-preload-590749 san=[127.0.0.1 192.168.39.117 localhost minikube test-preload-590749]
	I0923 13:41:52.301380  704724 provision.go:177] copyRemoteCerts
	I0923 13:41:52.301460  704724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:41:52.301492  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:41:52.304547  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.304885  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:52.304916  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.305120  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHPort
	I0923 13:41:52.305381  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:52.305550  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHUsername
	I0923 13:41:52.305705  704724 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/test-preload-590749/id_rsa Username:docker}
	I0923 13:41:52.388115  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 13:41:52.412842  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0923 13:41:52.436096  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:41:52.460208  704724 provision.go:87] duration metric: took 380.337268ms to configureAuth
	I0923 13:41:52.460242  704724 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:41:52.460471  704724 config.go:182] Loaded profile config "test-preload-590749": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0923 13:41:52.460619  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:41:52.463232  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.463573  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:52.463633  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.463737  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHPort
	I0923 13:41:52.463944  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:52.464147  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:52.464342  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHUsername
	I0923 13:41:52.464605  704724 main.go:141] libmachine: Using SSH client type: native
	I0923 13:41:52.464798  704724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0923 13:41:52.464816  704724 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:41:52.700987  704724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 13:41:52.701018  704724 machine.go:96] duration metric: took 986.302151ms to provisionDockerMachine
	I0923 13:41:52.701030  704724 start.go:293] postStartSetup for "test-preload-590749" (driver="kvm2")
	I0923 13:41:52.701040  704724 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:41:52.701057  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	I0923 13:41:52.701388  704724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:41:52.701421  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:41:52.704664  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.705025  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:52.705048  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.705286  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHPort
	I0923 13:41:52.705502  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:52.705681  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHUsername
	I0923 13:41:52.705811  704724 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/test-preload-590749/id_rsa Username:docker}
	I0923 13:41:52.788924  704724 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:41:52.793455  704724 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:41:52.793489  704724 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 13:41:52.793586  704724 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 13:41:52.793683  704724 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 13:41:52.793801  704724 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:41:52.803876  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:41:52.830200  704724 start.go:296] duration metric: took 129.155575ms for postStartSetup
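postStartSetup creates minikube's standard directory layout on the guest and syncs local file assets, here the extra certificate 6694472.pem, into /etc/ssl/certs. A quick, purely illustrative way to confirm the synced file on the guest:

    ls -l /etc/ssl/certs/6694472.pem
    openssl x509 -in /etc/ssl/certs/6694472.pem -noout -subject -enddate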
	I0923 13:41:52.830251  704724 fix.go:56] duration metric: took 18.852766691s for fixHost
	I0923 13:41:52.830284  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:41:52.832968  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.833381  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:52.833416  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.833556  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHPort
	I0923 13:41:52.833784  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:52.833964  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:52.834108  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHUsername
	I0923 13:41:52.834309  704724 main.go:141] libmachine: Using SSH client type: native
	I0923 13:41:52.834527  704724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0923 13:41:52.834542  704724 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:41:52.943008  704724 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727098912.915939091
	
	I0923 13:41:52.943035  704724 fix.go:216] guest clock: 1727098912.915939091
	I0923 13:41:52.943044  704724 fix.go:229] Guest: 2024-09-23 13:41:52.915939091 +0000 UTC Remote: 2024-09-23 13:41:52.830257638 +0000 UTC m=+31.287394694 (delta=85.681453ms)
	I0923 13:41:52.943085  704724 fix.go:200] guest clock delta is within tolerance: 85.681453ms
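fixHost measures clock skew by running date +%s.%N on the guest over SSH and comparing it with the host clock; the 85 ms delta above is inside the tolerance, so no time resync is performed. A rough sketch of the same comparison (SSH key and user taken from the log; the awk arithmetic is only illustrative):

    guest=$(ssh -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/test-preload-590749/id_rsa \
        docker@192.168.39.117 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; printf "delta=%.3fs\n", d }'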
	I0923 13:41:52.943090  704724 start.go:83] releasing machines lock for "test-preload-590749", held for 18.965622764s
	I0923 13:41:52.943112  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	I0923 13:41:52.943400  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetIP
	I0923 13:41:52.946600  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.946972  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:52.946995  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.947168  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	I0923 13:41:52.947854  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	I0923 13:41:52.948092  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	I0923 13:41:52.948191  704724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:41:52.948266  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:41:52.948325  704724 ssh_runner.go:195] Run: cat /version.json
	I0923 13:41:52.948347  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:41:52.951181  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.951464  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.951712  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:52.951742  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.951842  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:52.951874  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:52.951898  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHPort
	I0923 13:41:52.952031  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHPort
	I0923 13:41:52.952102  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:52.952165  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:41:52.952299  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHUsername
	I0923 13:41:52.952351  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHUsername
	I0923 13:41:52.952401  704724 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/test-preload-590749/id_rsa Username:docker}
	I0923 13:41:52.952456  704724 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/test-preload-590749/id_rsa Username:docker}
	I0923 13:41:53.064531  704724 ssh_runner.go:195] Run: systemctl --version
	I0923 13:41:53.071132  704724 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:41:53.215232  704724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 13:41:53.221326  704724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:41:53.221406  704724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:41:53.237103  704724 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
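Before configuring CRI-O, any bridge/podman CNI configs are renamed out of /etc/cni/net.d so that minikube's own bridge configuration wins; here 87-podman-bridge.conflist was disabled. The find invocation from the Run line above, reformatted for readability (the mv is quoted slightly more defensively here):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;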
	I0923 13:41:53.237141  704724 start.go:495] detecting cgroup driver to use...
	I0923 13:41:53.237226  704724 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:41:53.252822  704724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:41:53.268246  704724 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:41:53.268321  704724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:41:53.283108  704724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:41:53.297976  704724 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:41:53.414128  704724 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:41:53.548758  704724 docker.go:233] disabling docker service ...
	I0923 13:41:53.548839  704724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:41:53.563364  704724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:41:53.577917  704724 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:41:53.713968  704724 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:41:53.826948  704724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
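With containerd already stopped, the cri-docker and docker units are stopped, disabled, and masked so that CRI-O is the only runtime answering on the node. Condensed from the Run lines above:

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"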
	I0923 13:41:53.841327  704724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:41:53.860536  704724 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0923 13:41:53.860600  704724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:41:53.871377  704724 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:41:53.871461  704724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:41:53.881563  704724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:41:53.891903  704724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:41:53.902010  704724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:41:53.911831  704724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:41:53.921738  704724 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:41:53.939327  704724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
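The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the pause:3.7 image expected by Kubernetes v1.24.4, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. Collected from the individual Run lines (the rm of /etc/cni/net.mk is omitted):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"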
	I0923 13:41:53.949949  704724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:41:53.959958  704724 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:41:53.960028  704724 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 13:41:53.974218  704724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:41:53.983893  704724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:41:54.092016  704724 ssh_runner.go:195] Run: sudo systemctl restart crio
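The bridge-netfilter sysctl cannot be read until the br_netfilter module is loaded, which is why the probe above exits with status 255; minikube then loads the module, enables IPv4 forwarding, and restarts CRI-O. On the guest this amounts to:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables      # resolvable now that the module is loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio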
	I0923 13:41:54.181896  704724 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:41:54.181980  704724 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:41:54.186662  704724 start.go:563] Will wait 60s for crictl version
	I0923 13:41:54.186725  704724 ssh_runner.go:195] Run: which crictl
	I0923 13:41:54.190469  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:41:54.226888  704724 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 13:41:54.226979  704724 ssh_runner.go:195] Run: crio --version
	I0923 13:41:54.255426  704724 ssh_runner.go:195] Run: crio --version
	I0923 13:41:54.287596  704724 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0923 13:41:54.289050  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetIP
	I0923 13:41:54.291843  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:54.292190  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:41:54.292223  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:41:54.292429  704724 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 13:41:54.296348  704724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:41:54.308293  704724 kubeadm.go:883] updating cluster {Name:test-preload-590749 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-590749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:41:54.308415  704724 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0923 13:41:54.308459  704724 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:41:54.342268  704724 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0923 13:41:54.342333  704724 ssh_runner.go:195] Run: which lz4
	I0923 13:41:54.346463  704724 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 13:41:54.350398  704724 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 13:41:54.350433  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0923 13:41:55.890010  704724 crio.go:462] duration metric: took 1.54360958s to copy over tarball
	I0923 13:41:55.890110  704724 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 13:41:58.414869  704724 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.524722401s)
	I0923 13:41:58.414901  704724 crio.go:469] duration metric: took 2.524853148s to extract the tarball
	I0923 13:41:58.414909  704724 ssh_runner.go:146] rm: /preloaded.tar.lz4
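Since no v1.24.4 images were visible to the runtime, the ~459 MB preload tarball is copied to /preloaded.tar.lz4 on the guest and unpacked into /var, populating the container image store. The guest-side portion of that step, as run above:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json    # re-check what the runtime can now see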
	I0923 13:41:58.456581  704724 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:41:58.500610  704724 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0923 13:41:58.500638  704724 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 13:41:58.500732  704724 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:41:58.500770  704724 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 13:41:58.500772  704724 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 13:41:58.500733  704724 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 13:41:58.500801  704724 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 13:41:58.500772  704724 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 13:41:58.500838  704724 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 13:41:58.500865  704724 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 13:41:58.502377  704724 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 13:41:58.502402  704724 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 13:41:58.502406  704724 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 13:41:58.502416  704724 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 13:41:58.502413  704724 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 13:41:58.502488  704724 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 13:41:58.502488  704724 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:41:58.502410  704724 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 13:41:58.794129  704724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0923 13:41:58.800951  704724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0923 13:41:58.803842  704724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0923 13:41:58.812057  704724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0923 13:41:58.814884  704724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0923 13:41:58.825520  704724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 13:41:58.860583  704724 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0923 13:41:58.860639  704724 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 13:41:58.860687  704724 ssh_runner.go:195] Run: which crictl
	I0923 13:41:58.893913  704724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0923 13:41:58.931641  704724 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0923 13:41:58.931697  704724 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0923 13:41:58.931740  704724 ssh_runner.go:195] Run: which crictl
	I0923 13:41:58.974995  704724 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0923 13:41:58.975052  704724 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 13:41:58.975062  704724 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0923 13:41:58.975085  704724 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0923 13:41:58.975105  704724 ssh_runner.go:195] Run: which crictl
	I0923 13:41:58.975110  704724 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0923 13:41:58.975119  704724 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 13:41:58.975166  704724 ssh_runner.go:195] Run: which crictl
	I0923 13:41:58.975168  704724 ssh_runner.go:195] Run: which crictl
	I0923 13:41:58.975263  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 13:41:58.975230  704724 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0923 13:41:58.975315  704724 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 13:41:58.975368  704724 ssh_runner.go:195] Run: which crictl
	I0923 13:41:59.002355  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0923 13:41:59.002399  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0923 13:41:59.002421  704724 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0923 13:41:59.002463  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0923 13:41:59.002466  704724 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 13:41:59.002525  704724 ssh_runner.go:195] Run: which crictl
	I0923 13:41:59.002582  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 13:41:59.002606  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0923 13:41:59.036448  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 13:41:59.146823  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0923 13:41:59.146869  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0923 13:41:59.146966  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0923 13:41:59.147005  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0923 13:41:59.147060  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0923 13:41:59.147154  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 13:41:59.152589  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 13:41:59.292822  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0923 13:41:59.322620  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0923 13:41:59.322679  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0923 13:41:59.322712  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0923 13:41:59.322771  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0923 13:41:59.322808  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 13:41:59.322889  704724 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 13:41:59.322987  704724 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0923 13:41:59.379785  704724 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0923 13:41:59.379895  704724 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0923 13:41:59.453734  704724 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0923 13:41:59.453817  704724 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0923 13:41:59.453864  704724 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0923 13:41:59.453900  704724 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0923 13:41:59.453926  704724 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0923 13:41:59.453978  704724 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0923 13:41:59.459462  704724 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0923 13:41:59.459595  704724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0923 13:41:59.459619  704724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0923 13:41:59.459640  704724 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0923 13:41:59.459666  704724 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0923 13:41:59.459680  704724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0923 13:41:59.459677  704724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0923 13:41:59.463538  704724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0923 13:41:59.463580  704724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0923 13:41:59.465365  704724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0923 13:41:59.514716  704724 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0923 13:41:59.514841  704724 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0923 13:41:59.655543  704724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:42:02.051632  704724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.59191935s)
	I0923 13:42:02.051685  704724 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0923 13:42:02.051705  704724 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.592007845s)
	I0923 13:42:02.051717  704724 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0923 13:42:02.051739  704724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0923 13:42:02.051765  704724 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.536902348s)
	I0923 13:42:02.051780  704724 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.396205443s)
	I0923 13:42:02.051791  704724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0923 13:42:02.051797  704724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0923 13:42:02.203451  704724 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0923 13:42:02.203496  704724 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0923 13:42:02.203555  704724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0923 13:42:03.053931  704724 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0923 13:42:03.053991  704724 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0923 13:42:03.054054  704724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0923 13:42:05.304206  704724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.250120083s)
	I0923 13:42:05.304244  704724 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0923 13:42:05.304281  704724 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0923 13:42:05.304343  704724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0923 13:42:06.048126  704724 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0923 13:42:06.048178  704724 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0923 13:42:06.048235  704724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0923 13:42:06.793172  704724 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0923 13:42:06.793223  704724 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0923 13:42:06.793291  704724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0923 13:42:07.240235  704724 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0923 13:42:07.240319  704724 cache_images.go:123] Successfully loaded all cached images
	I0923 13:42:07.240327  704724 cache_images.go:92] duration metric: took 8.739675589s to LoadCachedImages
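Even after extracting the preload, the expected v1.24.4 images are not found in the runtime, so LoadCachedImages falls back to per-image transfer: the stale tag is removed with crictl, the cached archive is copied into /var/lib/minikube/images, and podman load imports it into the container storage that CRI-O reads. For a single image (pause:3.7, paths from the log) the loop boils down to:

    sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
    # after copying .minikube/cache/images/amd64/registry.k8s.io/pause_3.7 onto the guest:
    sudo podman load -i /var/lib/minikube/images/pause_3.7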
	I0923 13:42:07.240344  704724 kubeadm.go:934] updating node { 192.168.39.117 8443 v1.24.4 crio true true} ...
	I0923 13:42:07.240463  704724 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-590749 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-590749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:42:07.240574  704724 ssh_runner.go:195] Run: crio config
	I0923 13:42:07.287597  704724 cni.go:84] Creating CNI manager for ""
	I0923 13:42:07.287662  704724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 13:42:07.287674  704724 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:42:07.287694  704724 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.117 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-590749 NodeName:test-preload-590749 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:42:07.287840  704724 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-590749"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.117
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.117"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 13:42:07.287907  704724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0923 13:42:07.298126  704724 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:42:07.298196  704724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:42:07.308212  704724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0923 13:42:07.325266  704724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:42:07.341911  704724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0923 13:42:07.359295  704724 ssh_runner.go:195] Run: grep 192.168.39.117	control-plane.minikube.internal$ /etc/hosts
	I0923 13:42:07.363202  704724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.117	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:42:07.376183  704724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:42:07.499944  704724 ssh_runner.go:195] Run: sudo systemctl start kubelet
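With the kubelet drop-in, unit file, and kubeadm.yaml.new in place, minikube pins control-plane.minikube.internal in /etc/hosts and starts kubelet. A simplified guest-side equivalent (the log rewrites /etc/hosts atomically instead of appending):

    grep -q 'control-plane.minikube.internal' /etc/hosts || \
      printf '192.168.39.117\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts
    sudo systemctl daemon-reload
    sudo systemctl start kubelet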
	I0923 13:42:07.516137  704724 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749 for IP: 192.168.39.117
	I0923 13:42:07.516170  704724 certs.go:194] generating shared ca certs ...
	I0923 13:42:07.516195  704724 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:42:07.516422  704724 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 13:42:07.516481  704724 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 13:42:07.516505  704724 certs.go:256] generating profile certs ...
	I0923 13:42:07.516643  704724 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/client.key
	I0923 13:42:07.516709  704724 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/apiserver.key.f9ba092e
	I0923 13:42:07.516821  704724 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/proxy-client.key
	I0923 13:42:07.517023  704724 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 13:42:07.517073  704724 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 13:42:07.517087  704724 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 13:42:07.517121  704724 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 13:42:07.517156  704724 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:42:07.517189  704724 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 13:42:07.517244  704724 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:42:07.518362  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:42:07.559646  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:42:07.588131  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:42:07.614578  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:42:07.641278  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0923 13:42:07.667477  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 13:42:07.695700  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:42:07.730673  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 13:42:07.754607  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:42:07.777919  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 13:42:07.802161  704724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 13:42:07.827100  704724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:42:07.845047  704724 ssh_runner.go:195] Run: openssl version
	I0923 13:42:07.851002  704724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:42:07.862214  704724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:42:07.867179  704724 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:42:07.867253  704724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:42:07.873022  704724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:42:07.884530  704724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 13:42:07.895739  704724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 13:42:07.900166  704724 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 13:42:07.900230  704724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 13:42:07.906229  704724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 13:42:07.917480  704724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 13:42:07.928909  704724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 13:42:07.933583  704724 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 13:42:07.933661  704724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 13:42:07.939524  704724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 13:42:07.951677  704724 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:42:07.956628  704724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 13:42:07.963517  704724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 13:42:07.969909  704724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 13:42:07.976819  704724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 13:42:07.983102  704724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 13:42:07.989849  704724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
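Each control-plane certificate is checked with openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; a failure would trigger regeneration. For one of them:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another 24h" \
      || echo "expires within 24h - would be regenerated"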
	I0923 13:42:07.996381  704724 kubeadm.go:392] StartCluster: {Name:test-preload-590749 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-590749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:42:07.996511  704724 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 13:42:07.996580  704724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:42:08.038109  704724 cri.go:89] found id: ""
	I0923 13:42:08.038225  704724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 13:42:08.049139  704724 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 13:42:08.049167  704724 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 13:42:08.049217  704724 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 13:42:08.059550  704724 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 13:42:08.060113  704724 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-590749" does not appear in /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 13:42:08.060257  704724 kubeconfig.go:62] /home/jenkins/minikube-integration/19690-662205/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-590749" cluster setting kubeconfig missing "test-preload-590749" context setting]
	I0923 13:42:08.060551  704724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/kubeconfig: {Name:mk213d38080414fbe499f6509d2653fd99103348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:42:08.061217  704724 kapi.go:59] client config for test-preload-590749: &rest.Config{Host:"https://192.168.39.117:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/client.key", CAFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:42:08.061963  704724 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 13:42:08.072654  704724 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.117
	I0923 13:42:08.072693  704724 kubeadm.go:1160] stopping kube-system containers ...
	I0923 13:42:08.072705  704724 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0923 13:42:08.072755  704724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:42:08.109796  704724 cri.go:89] found id: ""
	I0923 13:42:08.109923  704724 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 13:42:08.127518  704724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 13:42:08.137858  704724 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:42:08.137880  704724 kubeadm.go:157] found existing configuration files:
	
	I0923 13:42:08.137933  704724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 13:42:08.147930  704724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:42:08.148006  704724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 13:42:08.158135  704724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 13:42:08.167723  704724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:42:08.167789  704724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 13:42:08.186108  704724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 13:42:08.195678  704724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:42:08.195752  704724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 13:42:08.205657  704724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 13:42:08.215378  704724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:42:08.215447  704724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
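The four grep/rm pairs above are the stale-config check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. A compact, hedged restatement of that pattern (a condensed equivalent, not minikube's actual implementation):

    # Hedged sketch of the stale-config cleanup seen above.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done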
	I0923 13:42:08.225390  704724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 13:42:08.235659  704724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:42:08.344400  704724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:42:09.027545  704724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:42:09.286379  704724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:42:09.344980  704724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
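Rather than a full `kubeadm init`, the restart re-runs individual init phases against the generated /var/tmp/minikube/kubeadm.yaml. The same commands from the log lines above, collected in one place for readability:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml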
	I0923 13:42:09.418605  704724 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:42:09.418817  704724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:42:09.919065  704724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:42:10.419088  704724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:42:10.433872  704724 api_server.go:72] duration metric: took 1.01526911s to wait for apiserver process to appear ...
	I0923 13:42:10.433903  704724 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:42:10.433926  704724 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0923 13:42:10.434456  704724 api_server.go:269] stopped: https://192.168.39.117:8443/healthz: Get "https://192.168.39.117:8443/healthz": dial tcp 192.168.39.117:8443: connect: connection refused
	I0923 13:42:10.934324  704724 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0923 13:42:14.527072  704724 api_server.go:279] https://192.168.39.117:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0923 13:42:14.527107  704724 api_server.go:103] status: https://192.168.39.117:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0923 13:42:14.527122  704724 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0923 13:42:14.597575  704724 api_server.go:279] https://192.168.39.117:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0923 13:42:14.597630  704724 api_server.go:103] status: https://192.168.39.117:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0923 13:42:14.933991  704724 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0923 13:42:14.940257  704724 api_server.go:279] https://192.168.39.117:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:42:14.940292  704724 api_server.go:103] status: https://192.168.39.117:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:42:15.434951  704724 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0923 13:42:15.442499  704724 api_server.go:279] https://192.168.39.117:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:42:15.442534  704724 api_server.go:103] status: https://192.168.39.117:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:42:15.934076  704724 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0923 13:42:15.939953  704724 api_server.go:279] https://192.168.39.117:8443/healthz returned 200:
	ok
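The probe loop above hits /healthz directly: anonymous requests are rejected with 403 until the RBAC bootstrap roles land, the endpoint then returns 500 while the remaining post-start hooks finish, and finally 200. A hedged way to reproduce the same check by hand with authenticated credentials (assuming the usual minikube context name for this profile; minikube's own poller is a Go HTTP client, not kubectl):

    # Hedged sketch: manual health probes against the restarted apiserver.
    kubectl --context test-preload-590749 get --raw='/healthz?verbose'
    kubectl --context test-preload-590749 get --raw='/readyz?verbose'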
	I0923 13:42:15.947860  704724 api_server.go:141] control plane version: v1.24.4
	I0923 13:42:15.947902  704724 api_server.go:131] duration metric: took 5.51399198s to wait for apiserver health ...
	I0923 13:42:15.947912  704724 cni.go:84] Creating CNI manager for ""
	I0923 13:42:15.947919  704724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 13:42:15.949744  704724 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 13:42:15.951315  704724 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 13:42:15.962549  704724 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
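The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, a representative bridge CNI config in the standard conflist format looks roughly like the sketch below; the exact contents minikube writes may differ.

    # Illustrative only -- not the actual file from this run.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF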
	I0923 13:42:15.980685  704724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:42:15.980806  704724 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 13:42:15.980830  704724 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 13:42:15.992730  704724 system_pods.go:59] 8 kube-system pods found
	I0923 13:42:15.992776  704724 system_pods.go:61] "coredns-6d4b75cb6d-2jjh4" [03310569-5e4c-4f81-bb9a-d119f6596262] Running
	I0923 13:42:15.992784  704724 system_pods.go:61] "coredns-6d4b75cb6d-wb8w6" [932e91cd-07e7-4cc9-9001-35e7477e759c] Running
	I0923 13:42:15.992790  704724 system_pods.go:61] "etcd-test-preload-590749" [fa7552c2-c2ef-41dd-b2c7-d2c5e5c1d1e1] Running
	I0923 13:42:15.992801  704724 system_pods.go:61] "kube-apiserver-test-preload-590749" [80de45e8-af49-4f6f-aeb0-3c20a055c2b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0923 13:42:15.992810  704724 system_pods.go:61] "kube-controller-manager-test-preload-590749" [b4b3c9bf-e0b9-47cf-a59c-7d896bf588c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0923 13:42:15.992844  704724 system_pods.go:61] "kube-proxy-mwmhn" [06ed7780-2005-4f95-b6d6-12e7245599f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0923 13:42:15.992854  704724 system_pods.go:61] "kube-scheduler-test-preload-590749" [abdfd948-d7f1-47a5-add8-d1f308b40cfb] Running
	I0923 13:42:15.992865  704724 system_pods.go:61] "storage-provisioner" [6d1909f4-f774-470d-a5e1-202c80703d0a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 13:42:15.992881  704724 system_pods.go:74] duration metric: took 12.16785ms to wait for pod list to return data ...
	I0923 13:42:15.992896  704724 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:42:15.996710  704724 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:42:15.996743  704724 node_conditions.go:123] node cpu capacity is 2
	I0923 13:42:15.996758  704724 node_conditions.go:105] duration metric: took 3.853881ms to run NodePressure ...
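The NodePressure step above reads node capacity (17734596Ki ephemeral storage, 2 CPUs) and the node conditions. A hedged equivalent of that inspection via kubectl (context name assumed to match the profile):

    # Hedged sketch: view the capacity and condition fields the check reads.
    kubectl --context test-preload-590749 get node test-preload-590749 -o jsonpath='{.status.capacity}'
    kubectl --context test-preload-590749 get node test-preload-590749 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'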
	I0923 13:42:15.996780  704724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:42:16.176110  704724 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0923 13:42:16.180162  704724 kubeadm.go:739] kubelet initialised
	I0923 13:42:16.180184  704724 kubeadm.go:740] duration metric: took 4.042341ms waiting for restarted kubelet to initialise ...
	I0923 13:42:16.180192  704724 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:42:16.184983  704724 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-2jjh4" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:16.190055  704724 pod_ready.go:98] node "test-preload-590749" hosting pod "coredns-6d4b75cb6d-2jjh4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:16.190080  704724 pod_ready.go:82] duration metric: took 5.070926ms for pod "coredns-6d4b75cb6d-2jjh4" in "kube-system" namespace to be "Ready" ...
	E0923 13:42:16.190089  704724 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-590749" hosting pod "coredns-6d4b75cb6d-2jjh4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:16.190096  704724 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-wb8w6" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:16.194623  704724 pod_ready.go:98] node "test-preload-590749" hosting pod "coredns-6d4b75cb6d-wb8w6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:16.194653  704724 pod_ready.go:82] duration metric: took 4.547411ms for pod "coredns-6d4b75cb6d-wb8w6" in "kube-system" namespace to be "Ready" ...
	E0923 13:42:16.194664  704724 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-590749" hosting pod "coredns-6d4b75cb6d-wb8w6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:16.194673  704724 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:16.199159  704724 pod_ready.go:98] node "test-preload-590749" hosting pod "etcd-test-preload-590749" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:16.199185  704724 pod_ready.go:82] duration metric: took 4.502229ms for pod "etcd-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	E0923 13:42:16.199196  704724 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-590749" hosting pod "etcd-test-preload-590749" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:16.199205  704724 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:16.384520  704724 pod_ready.go:98] node "test-preload-590749" hosting pod "kube-apiserver-test-preload-590749" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:16.384549  704724 pod_ready.go:82] duration metric: took 185.334282ms for pod "kube-apiserver-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	E0923 13:42:16.384559  704724 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-590749" hosting pod "kube-apiserver-test-preload-590749" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:16.384566  704724 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:16.785592  704724 pod_ready.go:98] node "test-preload-590749" hosting pod "kube-controller-manager-test-preload-590749" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:16.785625  704724 pod_ready.go:82] duration metric: took 401.043981ms for pod "kube-controller-manager-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	E0923 13:42:16.785635  704724 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-590749" hosting pod "kube-controller-manager-test-preload-590749" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:16.785642  704724 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mwmhn" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:17.185069  704724 pod_ready.go:98] node "test-preload-590749" hosting pod "kube-proxy-mwmhn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:17.185101  704724 pod_ready.go:82] duration metric: took 399.44846ms for pod "kube-proxy-mwmhn" in "kube-system" namespace to be "Ready" ...
	E0923 13:42:17.185110  704724 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-590749" hosting pod "kube-proxy-mwmhn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:17.185115  704724 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:17.585322  704724 pod_ready.go:98] node "test-preload-590749" hosting pod "kube-scheduler-test-preload-590749" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:17.585351  704724 pod_ready.go:82] duration metric: took 400.228768ms for pod "kube-scheduler-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	E0923 13:42:17.585361  704724 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-590749" hosting pod "kube-scheduler-test-preload-590749" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:17.585368  704724 pod_ready.go:39] duration metric: took 1.40516683s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
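The extra wait above polls each system-critical pod but skips them while the node itself is still NotReady. A hedged manual equivalent with `kubectl wait`, using labels taken from the log's own list (context name assumed):

    # Hedged sketch: wait for the node, then for representative control-plane pods.
    kubectl --context test-preload-590749 wait --for=condition=Ready \
      node/test-preload-590749 --timeout=4m
    kubectl --context test-preload-590749 -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=kube-dns --timeout=4m
    kubectl --context test-preload-590749 -n kube-system wait --for=condition=Ready \
      pod -l component=kube-apiserver --timeout=4m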
	I0923 13:42:17.585393  704724 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 13:42:17.597358  704724 ops.go:34] apiserver oom_adj: -16
	I0923 13:42:17.597389  704724 kubeadm.go:597] duration metric: took 9.548214315s to restartPrimaryControlPlane
	I0923 13:42:17.597438  704724 kubeadm.go:394] duration metric: took 9.601038571s to StartCluster
	I0923 13:42:17.597463  704724 settings.go:142] acquiring lock: {Name:mk3da09e51125fc906a9e1276ab490fc7b26b03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:42:17.597569  704724 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 13:42:17.598624  704724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/kubeconfig: {Name:mk213d38080414fbe499f6509d2653fd99103348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:42:17.599014  704724 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 13:42:17.599058  704724 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 13:42:17.599163  704724 addons.go:69] Setting storage-provisioner=true in profile "test-preload-590749"
	I0923 13:42:17.599187  704724 addons.go:234] Setting addon storage-provisioner=true in "test-preload-590749"
	W0923 13:42:17.599195  704724 addons.go:243] addon storage-provisioner should already be in state true
	I0923 13:42:17.599192  704724 addons.go:69] Setting default-storageclass=true in profile "test-preload-590749"
	I0923 13:42:17.599221  704724 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-590749"
	I0923 13:42:17.599229  704724 host.go:66] Checking if "test-preload-590749" exists ...
	I0923 13:42:17.599253  704724 config.go:182] Loaded profile config "test-preload-590749": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0923 13:42:17.599581  704724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:42:17.599626  704724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:42:17.599621  704724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:42:17.599665  704724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:42:17.600839  704724 out.go:177] * Verifying Kubernetes components...
	I0923 13:42:17.602801  704724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:42:17.615341  704724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45575
	I0923 13:42:17.615908  704724 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:42:17.616504  704724 main.go:141] libmachine: Using API Version  1
	I0923 13:42:17.616530  704724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:42:17.616879  704724 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:42:17.617494  704724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:42:17.617546  704724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:42:17.619737  704724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46107
	I0923 13:42:17.620211  704724 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:42:17.620748  704724 main.go:141] libmachine: Using API Version  1
	I0923 13:42:17.620775  704724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:42:17.621134  704724 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:42:17.621332  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetState
	I0923 13:42:17.623771  704724 kapi.go:59] client config for test-preload-590749: &rest.Config{Host:"https://192.168.39.117:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/profiles/test-preload-590749/client.key", CAFile:"/home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:42:17.624145  704724 addons.go:234] Setting addon default-storageclass=true in "test-preload-590749"
	W0923 13:42:17.624170  704724 addons.go:243] addon default-storageclass should already be in state true
	I0923 13:42:17.624212  704724 host.go:66] Checking if "test-preload-590749" exists ...
	I0923 13:42:17.624613  704724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:42:17.624663  704724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:42:17.636674  704724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I0923 13:42:17.637240  704724 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:42:17.637862  704724 main.go:141] libmachine: Using API Version  1
	I0923 13:42:17.637892  704724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:42:17.638411  704724 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:42:17.638655  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetState
	I0923 13:42:17.640575  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	I0923 13:42:17.641049  704724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I0923 13:42:17.641558  704724 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:42:17.642109  704724 main.go:141] libmachine: Using API Version  1
	I0923 13:42:17.642132  704724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:42:17.642677  704724 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:42:17.643030  704724 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:42:17.643193  704724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:42:17.643237  704724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:42:17.644624  704724 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:42:17.644651  704724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 13:42:17.644672  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:42:17.647459  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:42:17.647946  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:42:17.647978  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:42:17.648221  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHPort
	I0923 13:42:17.648417  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:42:17.648577  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHUsername
	I0923 13:42:17.648756  704724 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/test-preload-590749/id_rsa Username:docker}
	I0923 13:42:17.682470  704724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44365
	I0923 13:42:17.682975  704724 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:42:17.683552  704724 main.go:141] libmachine: Using API Version  1
	I0923 13:42:17.683584  704724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:42:17.683979  704724 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:42:17.684229  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetState
	I0923 13:42:17.686059  704724 main.go:141] libmachine: (test-preload-590749) Calling .DriverName
	I0923 13:42:17.686655  704724 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 13:42:17.686675  704724 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 13:42:17.686694  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHHostname
	I0923 13:42:17.689655  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:42:17.690148  704724 main.go:141] libmachine: (test-preload-590749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:c7:88", ip: ""} in network mk-test-preload-590749: {Iface:virbr1 ExpiryTime:2024-09-23 14:41:44 +0000 UTC Type:0 Mac:52:54:00:fc:c7:88 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:test-preload-590749 Clientid:01:52:54:00:fc:c7:88}
	I0923 13:42:17.690180  704724 main.go:141] libmachine: (test-preload-590749) DBG | domain test-preload-590749 has defined IP address 192.168.39.117 and MAC address 52:54:00:fc:c7:88 in network mk-test-preload-590749
	I0923 13:42:17.690332  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHPort
	I0923 13:42:17.690577  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHKeyPath
	I0923 13:42:17.690752  704724 main.go:141] libmachine: (test-preload-590749) Calling .GetSSHUsername
	I0923 13:42:17.690923  704724 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/test-preload-590749/id_rsa Username:docker}
	I0923 13:42:17.778753  704724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:42:17.796458  704724 node_ready.go:35] waiting up to 6m0s for node "test-preload-590749" to be "Ready" ...
	I0923 13:42:17.873451  704724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:42:17.923785  704724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 13:42:18.888174  704724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.014671592s)
	I0923 13:42:18.888258  704724 main.go:141] libmachine: Making call to close driver server
	I0923 13:42:18.888280  704724 main.go:141] libmachine: (test-preload-590749) Calling .Close
	I0923 13:42:18.888306  704724 main.go:141] libmachine: Making call to close driver server
	I0923 13:42:18.888328  704724 main.go:141] libmachine: (test-preload-590749) Calling .Close
	I0923 13:42:18.888559  704724 main.go:141] libmachine: Successfully made call to close driver server
	I0923 13:42:18.888577  704724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 13:42:18.888577  704724 main.go:141] libmachine: (test-preload-590749) DBG | Closing plugin on server side
	I0923 13:42:18.888585  704724 main.go:141] libmachine: Making call to close driver server
	I0923 13:42:18.888640  704724 main.go:141] libmachine: (test-preload-590749) Calling .Close
	I0923 13:42:18.888689  704724 main.go:141] libmachine: (test-preload-590749) DBG | Closing plugin on server side
	I0923 13:42:18.888708  704724 main.go:141] libmachine: Successfully made call to close driver server
	I0923 13:42:18.888724  704724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 13:42:18.888736  704724 main.go:141] libmachine: Making call to close driver server
	I0923 13:42:18.888744  704724 main.go:141] libmachine: (test-preload-590749) Calling .Close
	I0923 13:42:18.888859  704724 main.go:141] libmachine: Successfully made call to close driver server
	I0923 13:42:18.888885  704724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 13:42:18.888962  704724 main.go:141] libmachine: Successfully made call to close driver server
	I0923 13:42:18.888980  704724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 13:42:18.888967  704724 main.go:141] libmachine: (test-preload-590749) DBG | Closing plugin on server side
	I0923 13:42:18.898498  704724 main.go:141] libmachine: Making call to close driver server
	I0923 13:42:18.898519  704724 main.go:141] libmachine: (test-preload-590749) Calling .Close
	I0923 13:42:18.898805  704724 main.go:141] libmachine: Successfully made call to close driver server
	I0923 13:42:18.898824  704724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 13:42:18.898845  704724 main.go:141] libmachine: (test-preload-590749) DBG | Closing plugin on server side
	I0923 13:42:18.901508  704724 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 13:42:18.902947  704724 addons.go:510] duration metric: took 1.30389283s for enable addons: enabled=[storage-provisioner default-storageclass]
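The addon step applies the two manifests over SSH with the bundled v1.24.4 kubectl. A hedged way to confirm the result from outside the VM (context name assumed):

    # Hedged sketch: verify the enabled addons after the applies above.
    kubectl --context test-preload-590749 -n kube-system get pod storage-provisioner
    kubectl --context test-preload-590749 get storageclass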
	I0923 13:42:19.800007  704724 node_ready.go:53] node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:21.802302  704724 node_ready.go:53] node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:24.300030  704724 node_ready.go:53] node "test-preload-590749" has status "Ready":"False"
	I0923 13:42:24.800572  704724 node_ready.go:49] node "test-preload-590749" has status "Ready":"True"
	I0923 13:42:24.800602  704724 node_ready.go:38] duration metric: took 7.00409173s for node "test-preload-590749" to be "Ready" ...
	I0923 13:42:24.800612  704724 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:42:24.806973  704724 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-2jjh4" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:24.814002  704724 pod_ready.go:93] pod "coredns-6d4b75cb6d-2jjh4" in "kube-system" namespace has status "Ready":"True"
	I0923 13:42:24.814032  704724 pod_ready.go:82] duration metric: took 7.021172ms for pod "coredns-6d4b75cb6d-2jjh4" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:24.814043  704724 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:26.824640  704724 pod_ready.go:103] pod "etcd-test-preload-590749" in "kube-system" namespace has status "Ready":"False"
	I0923 13:42:28.320908  704724 pod_ready.go:93] pod "etcd-test-preload-590749" in "kube-system" namespace has status "Ready":"True"
	I0923 13:42:28.320934  704724 pod_ready.go:82] duration metric: took 3.506883696s for pod "etcd-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:28.320951  704724 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:28.325889  704724 pod_ready.go:93] pod "kube-apiserver-test-preload-590749" in "kube-system" namespace has status "Ready":"True"
	I0923 13:42:28.325913  704724 pod_ready.go:82] duration metric: took 4.95584ms for pod "kube-apiserver-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:28.325923  704724 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:28.331224  704724 pod_ready.go:93] pod "kube-controller-manager-test-preload-590749" in "kube-system" namespace has status "Ready":"True"
	I0923 13:42:28.331244  704724 pod_ready.go:82] duration metric: took 5.315104ms for pod "kube-controller-manager-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:28.331253  704724 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mwmhn" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:28.337159  704724 pod_ready.go:93] pod "kube-proxy-mwmhn" in "kube-system" namespace has status "Ready":"True"
	I0923 13:42:28.337181  704724 pod_ready.go:82] duration metric: took 5.92267ms for pod "kube-proxy-mwmhn" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:28.337189  704724 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:28.401673  704724 pod_ready.go:93] pod "kube-scheduler-test-preload-590749" in "kube-system" namespace has status "Ready":"True"
	I0923 13:42:28.401701  704724 pod_ready.go:82] duration metric: took 64.504889ms for pod "kube-scheduler-test-preload-590749" in "kube-system" namespace to be "Ready" ...
	I0923 13:42:28.401711  704724 pod_ready.go:39] duration metric: took 3.601090562s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:42:28.401726  704724 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:42:28.401794  704724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:42:28.416990  704724 api_server.go:72] duration metric: took 10.817928449s to wait for apiserver process to appear ...
	I0923 13:42:28.417038  704724 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:42:28.417065  704724 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0923 13:42:28.422572  704724 api_server.go:279] https://192.168.39.117:8443/healthz returned 200:
	ok
	I0923 13:42:28.423591  704724 api_server.go:141] control plane version: v1.24.4
	I0923 13:42:28.423617  704724 api_server.go:131] duration metric: took 6.571419ms to wait for apiserver health ...
	I0923 13:42:28.423626  704724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:42:28.605004  704724 system_pods.go:59] 7 kube-system pods found
	I0923 13:42:28.605037  704724 system_pods.go:61] "coredns-6d4b75cb6d-2jjh4" [03310569-5e4c-4f81-bb9a-d119f6596262] Running
	I0923 13:42:28.605042  704724 system_pods.go:61] "etcd-test-preload-590749" [fa7552c2-c2ef-41dd-b2c7-d2c5e5c1d1e1] Running
	I0923 13:42:28.605046  704724 system_pods.go:61] "kube-apiserver-test-preload-590749" [80de45e8-af49-4f6f-aeb0-3c20a055c2b5] Running
	I0923 13:42:28.605049  704724 system_pods.go:61] "kube-controller-manager-test-preload-590749" [b4b3c9bf-e0b9-47cf-a59c-7d896bf588c3] Running
	I0923 13:42:28.605052  704724 system_pods.go:61] "kube-proxy-mwmhn" [06ed7780-2005-4f95-b6d6-12e7245599f4] Running
	I0923 13:42:28.605057  704724 system_pods.go:61] "kube-scheduler-test-preload-590749" [abdfd948-d7f1-47a5-add8-d1f308b40cfb] Running
	I0923 13:42:28.605061  704724 system_pods.go:61] "storage-provisioner" [6d1909f4-f774-470d-a5e1-202c80703d0a] Running
	I0923 13:42:28.605067  704724 system_pods.go:74] duration metric: took 181.435182ms to wait for pod list to return data ...
	I0923 13:42:28.605074  704724 default_sa.go:34] waiting for default service account to be created ...
	I0923 13:42:28.801377  704724 default_sa.go:45] found service account: "default"
	I0923 13:42:28.801406  704724 default_sa.go:55] duration metric: took 196.325879ms for default service account to be created ...
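The default service-account check above can be reproduced directly (context name assumed):

    # Hedged sketch: confirm the "default" ServiceAccount exists.
    kubectl --context test-preload-590749 -n default get serviceaccount default -o name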
	I0923 13:42:28.801416  704724 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 13:42:29.003203  704724 system_pods.go:86] 7 kube-system pods found
	I0923 13:42:29.003239  704724 system_pods.go:89] "coredns-6d4b75cb6d-2jjh4" [03310569-5e4c-4f81-bb9a-d119f6596262] Running
	I0923 13:42:29.003245  704724 system_pods.go:89] "etcd-test-preload-590749" [fa7552c2-c2ef-41dd-b2c7-d2c5e5c1d1e1] Running
	I0923 13:42:29.003255  704724 system_pods.go:89] "kube-apiserver-test-preload-590749" [80de45e8-af49-4f6f-aeb0-3c20a055c2b5] Running
	I0923 13:42:29.003258  704724 system_pods.go:89] "kube-controller-manager-test-preload-590749" [b4b3c9bf-e0b9-47cf-a59c-7d896bf588c3] Running
	I0923 13:42:29.003262  704724 system_pods.go:89] "kube-proxy-mwmhn" [06ed7780-2005-4f95-b6d6-12e7245599f4] Running
	I0923 13:42:29.003266  704724 system_pods.go:89] "kube-scheduler-test-preload-590749" [abdfd948-d7f1-47a5-add8-d1f308b40cfb] Running
	I0923 13:42:29.003268  704724 system_pods.go:89] "storage-provisioner" [6d1909f4-f774-470d-a5e1-202c80703d0a] Running
	I0923 13:42:29.003275  704724 system_pods.go:126] duration metric: took 201.852935ms to wait for k8s-apps to be running ...
	I0923 13:42:29.003283  704724 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:42:29.003336  704724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:42:29.019439  704724 system_svc.go:56] duration metric: took 16.146089ms WaitForService to wait for kubelet
	I0923 13:42:29.019473  704724 kubeadm.go:582] duration metric: took 11.420418159s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:42:29.019506  704724 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:42:29.201437  704724 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:42:29.201471  704724 node_conditions.go:123] node cpu capacity is 2
	I0923 13:42:29.201486  704724 node_conditions.go:105] duration metric: took 181.974175ms to run NodePressure ...
	I0923 13:42:29.201503  704724 start.go:241] waiting for startup goroutines ...
	I0923 13:42:29.201513  704724 start.go:246] waiting for cluster config update ...
	I0923 13:42:29.201528  704724 start.go:255] writing updated cluster config ...
	I0923 13:42:29.201915  704724 ssh_runner.go:195] Run: rm -f paused
	I0923 13:42:29.252480  704724 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I0923 13:42:29.254620  704724 out.go:201] 
	W0923 13:42:29.256261  704724 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0923 13:42:29.257716  704724 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0923 13:42:29.259509  704724 out.go:177] * Done! kubectl is now configured to use "test-preload-590749" cluster and "default" namespace by default
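The warning above flags a seven-minor-version skew between /usr/local/bin/kubectl (1.31.1) and the cluster (1.24.4). As the hint suggests, the bundled, version-matched kubectl can be used instead, for example:

    # Use minikube's bundled kubectl (matches the cluster's v1.24.4).
    out/minikube-linux-amd64 -p test-preload-590749 kubectl -- get pods -A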
	
	
	==> CRI-O <==
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.187800567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098950187775000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15f4d4e9-2678-4344-9ded-873f526a0d7a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.188552047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0c8c110-57fd-403f-9386-7243cb23d202 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.188621158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0c8c110-57fd-403f-9386-7243cb23d202 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.188799334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7be309e9d7cdcdbe08c6e99da85fc2c9f8231bd5ffe216171e642e0e4df03b2,PodSandboxId:c1bb5773d93443b8fe7694e8016a3b4cf2b994d93414e005de1d033fa472ca52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727098943939767618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2jjh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03310569-5e4c-4f81-bb9a-d119f6596262,},Annotations:map[string]string{io.kubernetes.container.hash: 77bb2614,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339ca64c0fcddfd78e9350b6bbdc5a455a9175d0cc9dedfd8e3934ddc6a7ad51,PodSandboxId:8c9653644d6cd1173b899597f02aed82dd28e18bbdbed76402f1ab8c5c959c0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727098936742170511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 06ed7780-2005-4f95-b6d6-12e7245599f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7351a8a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41726e271e5653bfe10065c768b7d1f9b4b48bf4a8477965a93bda6604c1716,PodSandboxId:fb9567c023b5d291a0dd4bef469c5fd5f6edd0516a82750b68806abc8c3a38e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098936421934900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
1909f4-f774-470d-a5e1-202c80703d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 73f98bac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfbec2c95b82ef39d450557a353a7e69648f103ebbf0d96f158fade7a4ea92c,PodSandboxId:528dab2bd693892519c7da306cb07a2107f2b18a50cd4b9d334551785f992d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727098930150231093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150e42f5a4c10d34704aa1340b9e6d44,},Anno
tations:map[string]string{io.kubernetes.container.hash: c659d746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a77f9b7af459943f7dfb3cd841ca2efb0aede6d83396821e7095d03fc7558b3,PodSandboxId:10e2b91c5387554c5a5e1b5ce8ee36fb2ebba54b60e32e2c935d7243d6cbcc62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727098930145224774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dd82501b98b1923b0948d
8889ca265,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0294d0a0928f7fe196c7c9b8b474bdc62f2899b12a92811dee68d774e82508f4,PodSandboxId:70592d6ffbbad808c6d15bb8a1ce27cc395aad7ff2013e0022c3d214bf933f42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727098930123613665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcf31facc488751ef756b1c46228732e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f2e9df59c815ede2ea3d1708bc98883b24d9cccce9aab5a838e13931bd681,PodSandboxId:9d6bbc565d934355a8586dd73bebfb89f0d6909e22dbbb7bfdb9d243be460330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727098930125417567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84950c23c393955757f7a380bd5951e,},Annotation
s:map[string]string{io.kubernetes.container.hash: a9c76b37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0c8c110-57fd-403f-9386-7243cb23d202 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.228431863Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=882953a6-092a-42c7-ac95-aef0bf906ee6 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.228525920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=882953a6-092a-42c7-ac95-aef0bf906ee6 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.230331313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9314c5fd-701a-413b-a899-0b4ea664d18f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.230773543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098950230750706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9314c5fd-701a-413b-a899-0b4ea664d18f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.231594799Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b140c64-3a86-4abb-bc76-ae6d9b881c40 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.231648392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b140c64-3a86-4abb-bc76-ae6d9b881c40 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.231869900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7be309e9d7cdcdbe08c6e99da85fc2c9f8231bd5ffe216171e642e0e4df03b2,PodSandboxId:c1bb5773d93443b8fe7694e8016a3b4cf2b994d93414e005de1d033fa472ca52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727098943939767618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2jjh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03310569-5e4c-4f81-bb9a-d119f6596262,},Annotations:map[string]string{io.kubernetes.container.hash: 77bb2614,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339ca64c0fcddfd78e9350b6bbdc5a455a9175d0cc9dedfd8e3934ddc6a7ad51,PodSandboxId:8c9653644d6cd1173b899597f02aed82dd28e18bbdbed76402f1ab8c5c959c0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727098936742170511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 06ed7780-2005-4f95-b6d6-12e7245599f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7351a8a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41726e271e5653bfe10065c768b7d1f9b4b48bf4a8477965a93bda6604c1716,PodSandboxId:fb9567c023b5d291a0dd4bef469c5fd5f6edd0516a82750b68806abc8c3a38e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098936421934900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
1909f4-f774-470d-a5e1-202c80703d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 73f98bac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfbec2c95b82ef39d450557a353a7e69648f103ebbf0d96f158fade7a4ea92c,PodSandboxId:528dab2bd693892519c7da306cb07a2107f2b18a50cd4b9d334551785f992d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727098930150231093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150e42f5a4c10d34704aa1340b9e6d44,},Anno
tations:map[string]string{io.kubernetes.container.hash: c659d746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a77f9b7af459943f7dfb3cd841ca2efb0aede6d83396821e7095d03fc7558b3,PodSandboxId:10e2b91c5387554c5a5e1b5ce8ee36fb2ebba54b60e32e2c935d7243d6cbcc62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727098930145224774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dd82501b98b1923b0948d
8889ca265,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0294d0a0928f7fe196c7c9b8b474bdc62f2899b12a92811dee68d774e82508f4,PodSandboxId:70592d6ffbbad808c6d15bb8a1ce27cc395aad7ff2013e0022c3d214bf933f42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727098930123613665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcf31facc488751ef756b1c46228732e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f2e9df59c815ede2ea3d1708bc98883b24d9cccce9aab5a838e13931bd681,PodSandboxId:9d6bbc565d934355a8586dd73bebfb89f0d6909e22dbbb7bfdb9d243be460330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727098930125417567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84950c23c393955757f7a380bd5951e,},Annotation
s:map[string]string{io.kubernetes.container.hash: a9c76b37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b140c64-3a86-4abb-bc76-ae6d9b881c40 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.270652293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9468c74a-4ef6-4a4a-942d-b6ab1b2f74dd name=/runtime.v1.RuntimeService/Version
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.270740953Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9468c74a-4ef6-4a4a-942d-b6ab1b2f74dd name=/runtime.v1.RuntimeService/Version
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.271806598Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f02fa832-ed1c-49a6-9f46-569f39ef2611 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.272527402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098950272498258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f02fa832-ed1c-49a6-9f46-569f39ef2611 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.273145771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2683c934-62cb-4341-a6ae-d55aee188fd0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.273203379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2683c934-62cb-4341-a6ae-d55aee188fd0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.273364984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7be309e9d7cdcdbe08c6e99da85fc2c9f8231bd5ffe216171e642e0e4df03b2,PodSandboxId:c1bb5773d93443b8fe7694e8016a3b4cf2b994d93414e005de1d033fa472ca52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727098943939767618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2jjh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03310569-5e4c-4f81-bb9a-d119f6596262,},Annotations:map[string]string{io.kubernetes.container.hash: 77bb2614,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339ca64c0fcddfd78e9350b6bbdc5a455a9175d0cc9dedfd8e3934ddc6a7ad51,PodSandboxId:8c9653644d6cd1173b899597f02aed82dd28e18bbdbed76402f1ab8c5c959c0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727098936742170511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 06ed7780-2005-4f95-b6d6-12e7245599f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7351a8a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41726e271e5653bfe10065c768b7d1f9b4b48bf4a8477965a93bda6604c1716,PodSandboxId:fb9567c023b5d291a0dd4bef469c5fd5f6edd0516a82750b68806abc8c3a38e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098936421934900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
1909f4-f774-470d-a5e1-202c80703d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 73f98bac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfbec2c95b82ef39d450557a353a7e69648f103ebbf0d96f158fade7a4ea92c,PodSandboxId:528dab2bd693892519c7da306cb07a2107f2b18a50cd4b9d334551785f992d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727098930150231093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150e42f5a4c10d34704aa1340b9e6d44,},Anno
tations:map[string]string{io.kubernetes.container.hash: c659d746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a77f9b7af459943f7dfb3cd841ca2efb0aede6d83396821e7095d03fc7558b3,PodSandboxId:10e2b91c5387554c5a5e1b5ce8ee36fb2ebba54b60e32e2c935d7243d6cbcc62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727098930145224774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dd82501b98b1923b0948d
8889ca265,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0294d0a0928f7fe196c7c9b8b474bdc62f2899b12a92811dee68d774e82508f4,PodSandboxId:70592d6ffbbad808c6d15bb8a1ce27cc395aad7ff2013e0022c3d214bf933f42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727098930123613665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcf31facc488751ef756b1c46228732e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f2e9df59c815ede2ea3d1708bc98883b24d9cccce9aab5a838e13931bd681,PodSandboxId:9d6bbc565d934355a8586dd73bebfb89f0d6909e22dbbb7bfdb9d243be460330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727098930125417567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84950c23c393955757f7a380bd5951e,},Annotation
s:map[string]string{io.kubernetes.container.hash: a9c76b37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2683c934-62cb-4341-a6ae-d55aee188fd0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.306238219Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df34ef58-87ca-41db-bc98-a1afc7829a76 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.306315725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df34ef58-87ca-41db-bc98-a1afc7829a76 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.308031784Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a2281cb-4468-4884-bc31-f2a889cc637e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.308597517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098950308570427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a2281cb-4468-4884-bc31-f2a889cc637e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.309508218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d1eda5d-a8a6-46a5-be70-841959e57708 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.309564103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d1eda5d-a8a6-46a5-be70-841959e57708 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:42:30 test-preload-590749 crio[659]: time="2024-09-23 13:42:30.309732875Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7be309e9d7cdcdbe08c6e99da85fc2c9f8231bd5ffe216171e642e0e4df03b2,PodSandboxId:c1bb5773d93443b8fe7694e8016a3b4cf2b994d93414e005de1d033fa472ca52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727098943939767618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2jjh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03310569-5e4c-4f81-bb9a-d119f6596262,},Annotations:map[string]string{io.kubernetes.container.hash: 77bb2614,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339ca64c0fcddfd78e9350b6bbdc5a455a9175d0cc9dedfd8e3934ddc6a7ad51,PodSandboxId:8c9653644d6cd1173b899597f02aed82dd28e18bbdbed76402f1ab8c5c959c0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727098936742170511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwmhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 06ed7780-2005-4f95-b6d6-12e7245599f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7351a8a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41726e271e5653bfe10065c768b7d1f9b4b48bf4a8477965a93bda6604c1716,PodSandboxId:fb9567c023b5d291a0dd4bef469c5fd5f6edd0516a82750b68806abc8c3a38e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727098936421934900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
1909f4-f774-470d-a5e1-202c80703d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 73f98bac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfbec2c95b82ef39d450557a353a7e69648f103ebbf0d96f158fade7a4ea92c,PodSandboxId:528dab2bd693892519c7da306cb07a2107f2b18a50cd4b9d334551785f992d0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727098930150231093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 150e42f5a4c10d34704aa1340b9e6d44,},Anno
tations:map[string]string{io.kubernetes.container.hash: c659d746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a77f9b7af459943f7dfb3cd841ca2efb0aede6d83396821e7095d03fc7558b3,PodSandboxId:10e2b91c5387554c5a5e1b5ce8ee36fb2ebba54b60e32e2c935d7243d6cbcc62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727098930145224774,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dd82501b98b1923b0948d
8889ca265,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0294d0a0928f7fe196c7c9b8b474bdc62f2899b12a92811dee68d774e82508f4,PodSandboxId:70592d6ffbbad808c6d15bb8a1ce27cc395aad7ff2013e0022c3d214bf933f42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727098930123613665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcf31facc488751ef756b1c46228732e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f2e9df59c815ede2ea3d1708bc98883b24d9cccce9aab5a838e13931bd681,PodSandboxId:9d6bbc565d934355a8586dd73bebfb89f0d6909e22dbbb7bfdb9d243be460330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727098930125417567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-590749,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b84950c23c393955757f7a380bd5951e,},Annotation
s:map[string]string{io.kubernetes.container.hash: a9c76b37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d1eda5d-a8a6-46a5-be70-841959e57708 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d7be309e9d7cd       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   c1bb5773d9344       coredns-6d4b75cb6d-2jjh4
	339ca64c0fcdd       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   8c9653644d6cd       kube-proxy-mwmhn
	e41726e271e56       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   fb9567c023b5d       storage-provisioner
	cdfbec2c95b82       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   528dab2bd6938       etcd-test-preload-590749
	8a77f9b7af459       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   10e2b91c53875       kube-controller-manager-test-preload-590749
	c67f2e9df59c8       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   9d6bbc565d934       kube-apiserver-test-preload-590749
	0294d0a0928f7       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   70592d6ffbbad       kube-scheduler-test-preload-590749
	
	
	==> coredns [d7be309e9d7cdcdbe08c6e99da85fc2c9f8231bd5ffe216171e642e0e4df03b2] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:50023 - 25948 "HINFO IN 1609064725195947956.1188785018913038327. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017801394s
	
	
	==> describe nodes <==
	Name:               test-preload-590749
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-590749
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=test-preload-590749
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_40_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:40:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-590749
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:42:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:42:24 +0000   Mon, 23 Sep 2024 13:40:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:42:24 +0000   Mon, 23 Sep 2024 13:40:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:42:24 +0000   Mon, 23 Sep 2024 13:40:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:42:24 +0000   Mon, 23 Sep 2024 13:42:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    test-preload-590749
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bfe534269d04b97bc3b3f612dd1e91a
	  System UUID:                0bfe5342-69d0-4b97-bc3b-3f612dd1e91a
	  Boot ID:                    805a0a33-405c-44ba-887c-784ea0d7be17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2jjh4                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-test-preload-590749                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         98s
	  kube-system                 kube-apiserver-test-preload-590749             250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-test-preload-590749    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-mwmhn                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-test-preload-590749             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  Starting                 84s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  106s (x5 over 107s)  kubelet          Node test-preload-590749 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x5 over 107s)  kubelet          Node test-preload-590749 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x5 over 107s)  kubelet          Node test-preload-590749 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  98s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s                  kubelet          Node test-preload-590749 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                  kubelet          Node test-preload-590749 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                  kubelet          Node test-preload-590749 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s                  kubelet          Node test-preload-590749 status is now: NodeReady
	  Normal  RegisteredNode           86s                  node-controller  Node test-preload-590749 event: Registered Node test-preload-590749 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node test-preload-590749 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node test-preload-590749 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node test-preload-590749 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                   node-controller  Node test-preload-590749 event: Registered Node test-preload-590749 in Controller
	
	
	==> dmesg <==
	[Sep23 13:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051998] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037948] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.064921] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.948925] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.569663] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.205717] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.059996] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058724] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.160751] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.134129] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.270095] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Sep23 13:42] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	[  +0.061826] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.715963] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +6.217858] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.243543] systemd-fstab-generator[1749]: Ignoring "noauto" option for root device
	[  +6.076503] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [cdfbec2c95b82ef39d450557a353a7e69648f103ebbf0d96f158fade7a4ea92c] <==
	{"level":"info","ts":"2024-09-23T13:42:10.496Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d85ef093c7464643","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-23T13:42:10.504Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-23T13:42:10.504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 switched to configuration voters=(15591163477497366083)"}
	{"level":"info","ts":"2024-09-23T13:42:10.519Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"44831ab0f42e7be7","local-member-id":"d85ef093c7464643","added-peer-id":"d85ef093c7464643","added-peer-peer-urls":["https://192.168.39.117:2380"]}
	{"level":"info","ts":"2024-09-23T13:42:10.522Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T13:42:10.523Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d85ef093c7464643","initial-advertise-peer-urls":["https://192.168.39.117:2380"],"listen-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.117:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T13:42:10.523Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"44831ab0f42e7be7","local-member-id":"d85ef093c7464643","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:42:10.526Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:42:10.524Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2024-09-23T13:42:10.524Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T13:42:10.532Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2024-09-23T13:42:12.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-23T13:42:12.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T13:42:12.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 received MsgPreVoteResp from d85ef093c7464643 at term 2"}
	{"level":"info","ts":"2024-09-23T13:42:12.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T13:42:12.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 received MsgVoteResp from d85ef093c7464643 at term 3"}
	{"level":"info","ts":"2024-09-23T13:42:12.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T13:42:12.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d85ef093c7464643 elected leader d85ef093c7464643 at term 3"}
	{"level":"info","ts":"2024-09-23T13:42:12.065Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d85ef093c7464643","local-member-attributes":"{Name:test-preload-590749 ClientURLs:[https://192.168.39.117:2379]}","request-path":"/0/members/d85ef093c7464643/attributes","cluster-id":"44831ab0f42e7be7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T13:42:12.065Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:42:12.066Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T13:42:12.066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T13:42:12.066Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:42:12.067Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.117:2379"}
	{"level":"info","ts":"2024-09-23T13:42:12.067Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:42:30 up 0 min,  0 users,  load average: 0.50, 0.14, 0.05
	Linux test-preload-590749 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c67f2e9df59c815ede2ea3d1708bc98883b24d9cccce9aab5a838e13931bd681] <==
	I0923 13:42:14.480827       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0923 13:42:14.480851       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0923 13:42:14.500983       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0923 13:42:14.501171       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0923 13:42:14.501355       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0923 13:42:14.515198       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 13:42:14.516442       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0923 13:42:14.602127       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0923 13:42:14.655800       1 cache.go:39] Caches are synced for autoregister controller
	I0923 13:42:14.656048       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0923 13:42:14.657146       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0923 13:42:14.657243       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0923 13:42:14.657578       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 13:42:14.671047       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0923 13:42:14.692413       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 13:42:15.155462       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0923 13:42:15.469530       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 13:42:16.089404       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0923 13:42:16.105256       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0923 13:42:16.138048       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0923 13:42:16.154429       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 13:42:16.160376       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 13:42:16.974388       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0923 13:42:27.286501       1 controller.go:611] quota admission added evaluator for: endpoints
	I0923 13:42:27.339232       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8a77f9b7af459943f7dfb3cd841ca2efb0aede6d83396821e7095d03fc7558b3] <==
	I0923 13:42:27.097917       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0923 13:42:27.100289       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0923 13:42:27.105671       1 shared_informer.go:262] Caches are synced for taint
	I0923 13:42:27.105862       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0923 13:42:27.105988       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-590749. Assuming now as a timestamp.
	I0923 13:42:27.106046       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0923 13:42:27.106127       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0923 13:42:27.107068       1 event.go:294] "Event occurred" object="test-preload-590749" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-590749 event: Registered Node test-preload-590749 in Controller"
	I0923 13:42:27.119956       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0923 13:42:27.122872       1 shared_informer.go:262] Caches are synced for GC
	I0923 13:42:27.130437       1 shared_informer.go:262] Caches are synced for HPA
	I0923 13:42:27.237335       1 shared_informer.go:262] Caches are synced for persistent volume
	I0923 13:42:27.252499       1 shared_informer.go:262] Caches are synced for stateful set
	I0923 13:42:27.256004       1 shared_informer.go:262] Caches are synced for PVC protection
	I0923 13:42:27.257064       1 shared_informer.go:262] Caches are synced for attach detach
	I0923 13:42:27.271928       1 shared_informer.go:262] Caches are synced for ephemeral
	I0923 13:42:27.275454       1 shared_informer.go:262] Caches are synced for endpoint
	I0923 13:42:27.292128       1 shared_informer.go:262] Caches are synced for resource quota
	I0923 13:42:27.295692       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0923 13:42:27.308767       1 shared_informer.go:262] Caches are synced for expand
	I0923 13:42:27.313640       1 shared_informer.go:262] Caches are synced for resource quota
	I0923 13:42:27.328392       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0923 13:42:27.748497       1 shared_informer.go:262] Caches are synced for garbage collector
	I0923 13:42:27.748603       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0923 13:42:27.755455       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [339ca64c0fcddfd78e9350b6bbdc5a455a9175d0cc9dedfd8e3934ddc6a7ad51] <==
	I0923 13:42:16.931363       1 node.go:163] Successfully retrieved node IP: 192.168.39.117
	I0923 13:42:16.931533       1 server_others.go:138] "Detected node IP" address="192.168.39.117"
	I0923 13:42:16.931585       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0923 13:42:16.965527       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0923 13:42:16.965544       1 server_others.go:206] "Using iptables Proxier"
	I0923 13:42:16.965581       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0923 13:42:16.966668       1 server.go:661] "Version info" version="v1.24.4"
	I0923 13:42:16.966687       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:42:16.968278       1 config.go:317] "Starting service config controller"
	I0923 13:42:16.968497       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0923 13:42:16.968561       1 config.go:226] "Starting endpoint slice config controller"
	I0923 13:42:16.968580       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0923 13:42:16.969668       1 config.go:444] "Starting node config controller"
	I0923 13:42:16.972204       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0923 13:42:17.068777       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0923 13:42:17.068803       1 shared_informer.go:262] Caches are synced for service config
	I0923 13:42:17.072771       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [0294d0a0928f7fe196c7c9b8b474bdc62f2899b12a92811dee68d774e82508f4] <==
	I0923 13:42:11.070640       1 serving.go:348] Generated self-signed cert in-memory
	W0923 13:42:14.536782       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 13:42:14.536939       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 13:42:14.537025       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 13:42:14.537050       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 13:42:14.601960       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0923 13:42:14.602060       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:42:14.611005       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0923 13:42:14.611298       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 13:42:14.613322       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 13:42:14.611322       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0923 13:42:14.714929       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: I0923 13:42:15.468218    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl78j\" (UniqueName: \"kubernetes.io/projected/03310569-5e4c-4f81-bb9a-d119f6596262-kube-api-access-nl78j\") pod \"coredns-6d4b75cb6d-2jjh4\" (UID: \"03310569-5e4c-4f81-bb9a-d119f6596262\") " pod="kube-system/coredns-6d4b75cb6d-2jjh4"
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: I0923 13:42:15.468237    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4qst\" (UniqueName: \"kubernetes.io/projected/6d1909f4-f774-470d-a5e1-202c80703d0a-kube-api-access-h4qst\") pod \"storage-provisioner\" (UID: \"6d1909f4-f774-470d-a5e1-202c80703d0a\") " pod="kube-system/storage-provisioner"
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: I0923 13:42:15.468264    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06ed7780-2005-4f95-b6d6-12e7245599f4-lib-modules\") pod \"kube-proxy-mwmhn\" (UID: \"06ed7780-2005-4f95-b6d6-12e7245599f4\") " pod="kube-system/kube-proxy-mwmhn"
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: I0923 13:42:15.468283    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03310569-5e4c-4f81-bb9a-d119f6596262-config-volume\") pod \"coredns-6d4b75cb6d-2jjh4\" (UID: \"03310569-5e4c-4f81-bb9a-d119f6596262\") " pod="kube-system/coredns-6d4b75cb6d-2jjh4"
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: I0923 13:42:15.468302    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06ed7780-2005-4f95-b6d6-12e7245599f4-xtables-lock\") pod \"kube-proxy-mwmhn\" (UID: \"06ed7780-2005-4f95-b6d6-12e7245599f4\") " pod="kube-system/kube-proxy-mwmhn"
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: I0923 13:42:15.468339    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnklw\" (UniqueName: \"kubernetes.io/projected/06ed7780-2005-4f95-b6d6-12e7245599f4-kube-api-access-tnklw\") pod \"kube-proxy-mwmhn\" (UID: \"06ed7780-2005-4f95-b6d6-12e7245599f4\") " pod="kube-system/kube-proxy-mwmhn"
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: I0923 13:42:15.468352    1117 reconciler.go:159] "Reconciler: start to sync state"
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: I0923 13:42:15.907729    1117 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/932e91cd-07e7-4cc9-9001-35e7477e759c-config-volume\") pod \"932e91cd-07e7-4cc9-9001-35e7477e759c\" (UID: \"932e91cd-07e7-4cc9-9001-35e7477e759c\") "
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: I0923 13:42:15.907832    1117 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5bjg\" (UniqueName: \"kubernetes.io/projected/932e91cd-07e7-4cc9-9001-35e7477e759c-kube-api-access-r5bjg\") pod \"932e91cd-07e7-4cc9-9001-35e7477e759c\" (UID: \"932e91cd-07e7-4cc9-9001-35e7477e759c\") "
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: W0923 13:42:15.909522    1117 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/932e91cd-07e7-4cc9-9001-35e7477e759c/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: I0923 13:42:15.910234    1117 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/932e91cd-07e7-4cc9-9001-35e7477e759c-config-volume" (OuterVolumeSpecName: "config-volume") pod "932e91cd-07e7-4cc9-9001-35e7477e759c" (UID: "932e91cd-07e7-4cc9-9001-35e7477e759c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: E0923 13:42:15.910444    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: E0923 13:42:15.910649    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/03310569-5e4c-4f81-bb9a-d119f6596262-config-volume podName:03310569-5e4c-4f81-bb9a-d119f6596262 nodeName:}" failed. No retries permitted until 2024-09-23 13:42:16.410576 +0000 UTC m=+7.131720825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/03310569-5e4c-4f81-bb9a-d119f6596262-config-volume") pod "coredns-6d4b75cb6d-2jjh4" (UID: "03310569-5e4c-4f81-bb9a-d119f6596262") : object "kube-system"/"coredns" not registered
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: W0923 13:42:15.910654    1117 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/932e91cd-07e7-4cc9-9001-35e7477e759c/volumes/kubernetes.io~projected/kube-api-access-r5bjg: clearQuota called, but quotas disabled
	Sep 23 13:42:15 test-preload-590749 kubelet[1117]: I0923 13:42:15.911054    1117 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/932e91cd-07e7-4cc9-9001-35e7477e759c-kube-api-access-r5bjg" (OuterVolumeSpecName: "kube-api-access-r5bjg") pod "932e91cd-07e7-4cc9-9001-35e7477e759c" (UID: "932e91cd-07e7-4cc9-9001-35e7477e759c"). InnerVolumeSpecName "kube-api-access-r5bjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 13:42:16 test-preload-590749 kubelet[1117]: I0923 13:42:16.008345    1117 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/932e91cd-07e7-4cc9-9001-35e7477e759c-config-volume\") on node \"test-preload-590749\" DevicePath \"\""
	Sep 23 13:42:16 test-preload-590749 kubelet[1117]: I0923 13:42:16.008388    1117 reconciler.go:384] "Volume detached for volume \"kube-api-access-r5bjg\" (UniqueName: \"kubernetes.io/projected/932e91cd-07e7-4cc9-9001-35e7477e759c-kube-api-access-r5bjg\") on node \"test-preload-590749\" DevicePath \"\""
	Sep 23 13:42:16 test-preload-590749 kubelet[1117]: E0923 13:42:16.411175    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 23 13:42:16 test-preload-590749 kubelet[1117]: E0923 13:42:16.411872    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/03310569-5e4c-4f81-bb9a-d119f6596262-config-volume podName:03310569-5e4c-4f81-bb9a-d119f6596262 nodeName:}" failed. No retries permitted until 2024-09-23 13:42:17.41178161 +0000 UTC m=+8.132926447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/03310569-5e4c-4f81-bb9a-d119f6596262-config-volume") pod "coredns-6d4b75cb6d-2jjh4" (UID: "03310569-5e4c-4f81-bb9a-d119f6596262") : object "kube-system"/"coredns" not registered
	Sep 23 13:42:17 test-preload-590749 kubelet[1117]: E0923 13:42:17.420347    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 23 13:42:17 test-preload-590749 kubelet[1117]: E0923 13:42:17.420421    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/03310569-5e4c-4f81-bb9a-d119f6596262-config-volume podName:03310569-5e4c-4f81-bb9a-d119f6596262 nodeName:}" failed. No retries permitted until 2024-09-23 13:42:19.42039977 +0000 UTC m=+10.141544607 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/03310569-5e4c-4f81-bb9a-d119f6596262-config-volume") pod "coredns-6d4b75cb6d-2jjh4" (UID: "03310569-5e4c-4f81-bb9a-d119f6596262") : object "kube-system"/"coredns" not registered
	Sep 23 13:42:17 test-preload-590749 kubelet[1117]: E0923 13:42:17.506351    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-2jjh4" podUID=03310569-5e4c-4f81-bb9a-d119f6596262
	Sep 23 13:42:17 test-preload-590749 kubelet[1117]: I0923 13:42:17.511691    1117 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=932e91cd-07e7-4cc9-9001-35e7477e759c path="/var/lib/kubelet/pods/932e91cd-07e7-4cc9-9001-35e7477e759c/volumes"
	Sep 23 13:42:19 test-preload-590749 kubelet[1117]: E0923 13:42:19.437417    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 23 13:42:19 test-preload-590749 kubelet[1117]: E0923 13:42:19.437942    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/03310569-5e4c-4f81-bb9a-d119f6596262-config-volume podName:03310569-5e4c-4f81-bb9a-d119f6596262 nodeName:}" failed. No retries permitted until 2024-09-23 13:42:23.4379183 +0000 UTC m=+14.159063137 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/03310569-5e4c-4f81-bb9a-d119f6596262-config-volume") pod "coredns-6d4b75cb6d-2jjh4" (UID: "03310569-5e4c-4f81-bb9a-d119f6596262") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [e41726e271e5653bfe10065c768b7d1f9b4b48bf4a8477965a93bda6604c1716] <==
	I0923 13:42:16.501232       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-590749 -n test-preload-590749
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-590749 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
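Note on the kubelet errors above: MountVolume.SetUp for the coredns config-volume keeps failing because the "kube-system"/"coredns" ConfigMap is not yet registered with the restarted kubelet, and each retry doubles its delay (500ms, 1s, 2s, 4s); the pod also cannot sync until a CNI config appears in /etc/cni/net.d/. A minimal Go sketch of that doubling-backoff wait, polling the ConfigMap with client-go (the package and helper name waitForConfigMap are hypothetical; this is not kubelet code), could look like:

	// waitForConfigMap polls for a ConfigMap with a doubling backoff,
	// mirroring the 500ms -> 1s -> 2s -> 4s retry delays in the kubelet
	// log above. Hypothetical sketch, not the kubelet's implementation.
	package configmapwait

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitForConfigMap(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		delay := 500 * time.Millisecond
		for {
			if _, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{}); err == nil {
				return nil // ConfigMap is registered; the volume can be mounted
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("gave up waiting for ConfigMap %s/%s: %w", ns, name, ctx.Err())
			case <-time.After(delay):
			}
			if delay < 4*time.Second {
				delay *= 2 // same doubling as durationBeforeRetry in the log
			}
		}
	}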
helpers_test.go:175: Cleaning up "test-preload-590749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-590749
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-590749: (1.183990221s)
--- FAIL: TestPreload (175.51s)

                                                
                                    
TestKubernetesUpgrade (412.44s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-678282 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-678282 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m32.170949068s)
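Per the stdout below, the control-plane bootstrap is attempted twice ("Generating certificates and keys" / "Booting up control plane" repeat) before the start gives up with exit status 109. A minimal Go sketch that re-runs the same invocation and surfaces the exit code (runUpgradeStart and the repro package are hypothetical; this is not the code in version_upgrade_test.go) could look like:

	// runUpgradeStart re-runs the failing minikube start command from the
	// log above and returns its exit code (109 in this run). Repro sketch
	// only; the binary path and profile name are taken from the log.
	package repro

	import (
		"errors"
		"os/exec"
	)

	func runUpgradeStart() (int, error) {
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "kubernetes-upgrade-678282",
			"--memory=2200",
			"--kubernetes-version=v1.20.0",
			"--alsologtostderr", "-v=1",
			"--driver=kvm2",
			"--container-runtime=crio")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode(), nil // non-zero exit, e.g. 109 as above
		}
		if err != nil {
			return 0, err // the command could not be started at all
		}
		return 0, nil
	}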

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-678282] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-678282" primary control-plane node in "kubernetes-upgrade-678282" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:44:24.227933  706229 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:44:24.228067  706229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:44:24.228074  706229 out.go:358] Setting ErrFile to fd 2...
	I0923 13:44:24.228079  706229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:44:24.228428  706229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:44:24.229105  706229 out.go:352] Setting JSON to false
	I0923 13:44:24.230457  706229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12407,"bootTime":1727086657,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 13:44:24.230544  706229 start.go:139] virtualization: kvm guest
	I0923 13:44:24.233295  706229 out.go:177] * [kubernetes-upgrade-678282] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 13:44:24.235555  706229 notify.go:220] Checking for updates...
	I0923 13:44:24.237027  706229 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:44:24.240441  706229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:44:24.243587  706229 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 13:44:24.247266  706229 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 13:44:24.249524  706229 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 13:44:24.250997  706229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:44:24.252535  706229 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:44:24.294866  706229 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 13:44:24.296336  706229 start.go:297] selected driver: kvm2
	I0923 13:44:24.296351  706229 start.go:901] validating driver "kvm2" against <nil>
	I0923 13:44:24.296365  706229 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:44:24.297319  706229 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:44:24.297420  706229 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 13:44:24.315149  706229 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 13:44:24.315214  706229 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:44:24.315550  706229 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 13:44:24.315589  706229 cni.go:84] Creating CNI manager for ""
	I0923 13:44:24.315650  706229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 13:44:24.315684  706229 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 13:44:24.315766  706229 start.go:340] cluster config:
	{Name:kubernetes-upgrade-678282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-678282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:44:24.315914  706229 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:44:24.317979  706229 out.go:177] * Starting "kubernetes-upgrade-678282" primary control-plane node in "kubernetes-upgrade-678282" cluster
	I0923 13:44:24.319467  706229 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 13:44:24.319517  706229 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0923 13:44:24.319529  706229 cache.go:56] Caching tarball of preloaded images
	I0923 13:44:24.319628  706229 preload.go:172] Found /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 13:44:24.319641  706229 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0923 13:44:24.319966  706229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/config.json ...
	I0923 13:44:24.320000  706229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/config.json: {Name:mkc5b25f4261643489b7117b0edbbad7e438452c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:44:24.320140  706229 start.go:360] acquireMachinesLock for kubernetes-upgrade-678282: {Name:mka98570d4b4becad22300323f1f88e64743eec3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:44:24.320189  706229 start.go:364] duration metric: took 31.884µs to acquireMachinesLock for "kubernetes-upgrade-678282"
	I0923 13:44:24.320213  706229 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-678282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-678282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 13:44:24.320267  706229 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 13:44:24.322342  706229 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 13:44:24.322494  706229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:44:24.322545  706229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:44:24.339896  706229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40777
	I0923 13:44:24.340391  706229 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:44:24.341018  706229 main.go:141] libmachine: Using API Version  1
	I0923 13:44:24.341040  706229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:44:24.341542  706229 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:44:24.341873  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetMachineName
	I0923 13:44:24.342085  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .DriverName
	I0923 13:44:24.342336  706229 start.go:159] libmachine.API.Create for "kubernetes-upgrade-678282" (driver="kvm2")
	I0923 13:44:24.342381  706229 client.go:168] LocalClient.Create starting
	I0923 13:44:24.342428  706229 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem
	I0923 13:44:24.342471  706229 main.go:141] libmachine: Decoding PEM data...
	I0923 13:44:24.342492  706229 main.go:141] libmachine: Parsing certificate...
	I0923 13:44:24.342553  706229 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem
	I0923 13:44:24.342577  706229 main.go:141] libmachine: Decoding PEM data...
	I0923 13:44:24.342590  706229 main.go:141] libmachine: Parsing certificate...
	I0923 13:44:24.342604  706229 main.go:141] libmachine: Running pre-create checks...
	I0923 13:44:24.342620  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .PreCreateCheck
	I0923 13:44:24.343045  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetConfigRaw
	I0923 13:44:24.343472  706229 main.go:141] libmachine: Creating machine...
	I0923 13:44:24.343489  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .Create
	I0923 13:44:24.343648  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Creating KVM machine...
	I0923 13:44:24.344978  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found existing default KVM network
	I0923 13:44:24.345805  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:24.345634  706288 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000187a30}
	I0923 13:44:24.345855  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | created network xml: 
	I0923 13:44:24.345871  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | <network>
	I0923 13:44:24.345884  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG |   <name>mk-kubernetes-upgrade-678282</name>
	I0923 13:44:24.345894  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG |   <dns enable='no'/>
	I0923 13:44:24.345904  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG |   
	I0923 13:44:24.345918  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 13:44:24.345928  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG |     <dhcp>
	I0923 13:44:24.345938  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 13:44:24.345950  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG |     </dhcp>
	I0923 13:44:24.345959  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG |   </ip>
	I0923 13:44:24.345966  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG |   
	I0923 13:44:24.345977  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | </network>
	I0923 13:44:24.345984  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | 
	I0923 13:44:24.351903  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | trying to create private KVM network mk-kubernetes-upgrade-678282 192.168.39.0/24...
	I0923 13:44:24.424655  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Setting up store path in /home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282 ...
	I0923 13:44:24.424683  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | private KVM network mk-kubernetes-upgrade-678282 192.168.39.0/24 created
	I0923 13:44:24.424692  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Building disk image from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 13:44:24.424713  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Downloading /home/jenkins/minikube-integration/19690-662205/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 13:44:24.424752  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:24.424584  706288 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 13:44:24.702789  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:24.702648  706288 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282/id_rsa...
	I0923 13:44:24.777539  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:24.777373  706288 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282/kubernetes-upgrade-678282.rawdisk...
	I0923 13:44:24.777577  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Writing magic tar header
	I0923 13:44:24.777597  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Writing SSH key tar header
	I0923 13:44:24.777616  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:24.777523  706288 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282 ...
	I0923 13:44:24.777672  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282
	I0923 13:44:24.777687  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282 (perms=drwx------)
	I0923 13:44:24.777705  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube/machines
	I0923 13:44:24.777724  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube/machines (perms=drwxr-xr-x)
	I0923 13:44:24.777740  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205/.minikube (perms=drwxr-xr-x)
	I0923 13:44:24.777748  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 13:44:24.777756  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-662205
	I0923 13:44:24.777767  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 13:44:24.777785  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Checking permissions on dir: /home/jenkins
	I0923 13:44:24.777806  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Setting executable bit set on /home/jenkins/minikube-integration/19690-662205 (perms=drwxrwxr-x)
	I0923 13:44:24.777818  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Checking permissions on dir: /home
	I0923 13:44:24.777853  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Skipping /home - not owner
	I0923 13:44:24.777950  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 13:44:24.777990  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 13:44:24.778015  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Creating domain...
	I0923 13:44:24.779039  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) define libvirt domain using xml: 
	I0923 13:44:24.779102  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) <domain type='kvm'>
	I0923 13:44:24.779118  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   <name>kubernetes-upgrade-678282</name>
	I0923 13:44:24.779140  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   <memory unit='MiB'>2200</memory>
	I0923 13:44:24.779147  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   <vcpu>2</vcpu>
	I0923 13:44:24.779157  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   <features>
	I0923 13:44:24.779187  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <acpi/>
	I0923 13:44:24.779207  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <apic/>
	I0923 13:44:24.779218  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <pae/>
	I0923 13:44:24.779236  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     
	I0923 13:44:24.779248  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   </features>
	I0923 13:44:24.779264  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   <cpu mode='host-passthrough'>
	I0923 13:44:24.779275  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   
	I0923 13:44:24.779282  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   </cpu>
	I0923 13:44:24.779290  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   <os>
	I0923 13:44:24.779302  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <type>hvm</type>
	I0923 13:44:24.779313  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <boot dev='cdrom'/>
	I0923 13:44:24.779322  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <boot dev='hd'/>
	I0923 13:44:24.779330  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <bootmenu enable='no'/>
	I0923 13:44:24.779336  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   </os>
	I0923 13:44:24.779344  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   <devices>
	I0923 13:44:24.779356  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <disk type='file' device='cdrom'>
	I0923 13:44:24.779374  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282/boot2docker.iso'/>
	I0923 13:44:24.779392  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <target dev='hdc' bus='scsi'/>
	I0923 13:44:24.779402  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <readonly/>
	I0923 13:44:24.779412  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     </disk>
	I0923 13:44:24.779425  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <disk type='file' device='disk'>
	I0923 13:44:24.779441  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 13:44:24.779458  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <source file='/home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282/kubernetes-upgrade-678282.rawdisk'/>
	I0923 13:44:24.779474  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <target dev='hda' bus='virtio'/>
	I0923 13:44:24.779486  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     </disk>
	I0923 13:44:24.779504  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <interface type='network'>
	I0923 13:44:24.779517  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <source network='mk-kubernetes-upgrade-678282'/>
	I0923 13:44:24.779524  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <model type='virtio'/>
	I0923 13:44:24.779561  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     </interface>
	I0923 13:44:24.779583  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <interface type='network'>
	I0923 13:44:24.779596  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <source network='default'/>
	I0923 13:44:24.779607  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <model type='virtio'/>
	I0923 13:44:24.779618  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     </interface>
	I0923 13:44:24.779634  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <serial type='pty'>
	I0923 13:44:24.779642  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <target port='0'/>
	I0923 13:44:24.779656  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     </serial>
	I0923 13:44:24.779668  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <console type='pty'>
	I0923 13:44:24.779680  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <target type='serial' port='0'/>
	I0923 13:44:24.779690  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     </console>
	I0923 13:44:24.779698  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     <rng model='virtio'>
	I0923 13:44:24.779710  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)       <backend model='random'>/dev/random</backend>
	I0923 13:44:24.779719  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     </rng>
	I0923 13:44:24.779747  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     
	I0923 13:44:24.779765  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)     
	I0923 13:44:24.779776  706229 main.go:141] libmachine: (kubernetes-upgrade-678282)   </devices>
	I0923 13:44:24.779785  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) </domain>
	I0923 13:44:24.779794  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) 
	I0923 13:44:24.784824  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:1c:03:43 in network default
	I0923 13:44:24.785452  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Ensuring networks are active...
	I0923 13:44:24.785496  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:24.786257  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Ensuring network default is active
	I0923 13:44:24.786619  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Ensuring network mk-kubernetes-upgrade-678282 is active
	I0923 13:44:24.787163  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Getting domain xml...
	I0923 13:44:24.787835  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Creating domain...
	I0923 13:44:26.129605  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Waiting to get IP...
	I0923 13:44:26.130467  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:26.130957  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:26.131036  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:26.130904  706288 retry.go:31] will retry after 195.299134ms: waiting for machine to come up
	I0923 13:44:26.328361  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:26.328858  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:26.328880  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:26.328811  706288 retry.go:31] will retry after 352.791984ms: waiting for machine to come up
	I0923 13:44:26.683539  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:26.684212  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:26.684248  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:26.684116  706288 retry.go:31] will retry after 435.169821ms: waiting for machine to come up
	I0923 13:44:27.120800  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:27.121344  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:27.121374  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:27.121259  706288 retry.go:31] will retry after 480.605116ms: waiting for machine to come up
	I0923 13:44:27.604089  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:27.604557  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:27.604649  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:27.604585  706288 retry.go:31] will retry after 720.331688ms: waiting for machine to come up
	I0923 13:44:28.326486  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:28.326950  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:28.326981  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:28.326880  706288 retry.go:31] will retry after 680.021846ms: waiting for machine to come up
	I0923 13:44:29.008335  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:29.008914  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:29.008955  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:29.008882  706288 retry.go:31] will retry after 791.075356ms: waiting for machine to come up
	I0923 13:44:29.801105  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:29.801685  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:29.801714  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:29.801631  706288 retry.go:31] will retry after 1.157416819s: waiting for machine to come up
	I0923 13:44:30.961275  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:30.961870  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:30.961902  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:30.961817  706288 retry.go:31] will retry after 1.858453922s: waiting for machine to come up
	I0923 13:44:32.823045  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:32.823426  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:32.823459  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:32.823375  706288 retry.go:31] will retry after 1.976832689s: waiting for machine to come up
	I0923 13:44:34.801779  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:34.802335  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:34.802373  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:34.802264  706288 retry.go:31] will retry after 1.892653515s: waiting for machine to come up
	I0923 13:44:36.697361  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:36.697878  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:36.697915  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:36.697802  706288 retry.go:31] will retry after 2.631019637s: waiting for machine to come up
	I0923 13:44:39.331719  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:39.332099  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:39.332127  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:39.332050  706288 retry.go:31] will retry after 4.543201719s: waiting for machine to come up
	I0923 13:44:43.878864  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:43.879256  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find current IP address of domain kubernetes-upgrade-678282 in network mk-kubernetes-upgrade-678282
	I0923 13:44:43.879292  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | I0923 13:44:43.879179  706288 retry.go:31] will retry after 5.119505224s: waiting for machine to come up
	I0923 13:44:49.004686  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.005203  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Found IP for machine: 192.168.39.215
	I0923 13:44:49.005236  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has current primary IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.005256  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Reserving static IP address...
	I0923 13:44:49.005703  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-678282", mac: "52:54:00:00:31:f0", ip: "192.168.39.215"} in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.085205  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Getting to WaitForSSH function...
	I0923 13:44:49.085247  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Reserved static IP address: 192.168.39.215
	I0923 13:44:49.085270  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Waiting for SSH to be available...
	I0923 13:44:49.087743  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.088220  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:49.088278  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.088397  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Using SSH client type: external
	I0923 13:44:49.088440  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282/id_rsa (-rw-------)
	I0923 13:44:49.088479  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 13:44:49.088498  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | About to run SSH command:
	I0923 13:44:49.088515  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | exit 0
	I0923 13:44:49.218032  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | SSH cmd err, output: <nil>: 
	I0923 13:44:49.218266  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) KVM machine creation complete!
	I0923 13:44:49.218621  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetConfigRaw
	I0923 13:44:49.219205  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .DriverName
	I0923 13:44:49.219429  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .DriverName
	I0923 13:44:49.219567  706229 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 13:44:49.219579  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetState
	I0923 13:44:49.221017  706229 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 13:44:49.221032  706229 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 13:44:49.221039  706229 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 13:44:49.221047  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHHostname
	I0923 13:44:49.223310  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.223692  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:49.223718  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.223872  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHPort
	I0923 13:44:49.224071  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:49.224235  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:49.224407  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHUsername
	I0923 13:44:49.224586  706229 main.go:141] libmachine: Using SSH client type: native
	I0923 13:44:49.224816  706229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0923 13:44:49.224831  706229 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 13:44:49.333099  706229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:44:49.333122  706229 main.go:141] libmachine: Detecting the provisioner...
	I0923 13:44:49.333129  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHHostname
	I0923 13:44:49.336077  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.336438  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:49.336465  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.336683  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHPort
	I0923 13:44:49.336859  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:49.336996  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:49.337084  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHUsername
	I0923 13:44:49.337197  706229 main.go:141] libmachine: Using SSH client type: native
	I0923 13:44:49.337420  706229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0923 13:44:49.337432  706229 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 13:44:49.446627  706229 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 13:44:49.446746  706229 main.go:141] libmachine: found compatible host: buildroot
	I0923 13:44:49.446760  706229 main.go:141] libmachine: Provisioning with buildroot...
	I0923 13:44:49.446772  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetMachineName
	I0923 13:44:49.447028  706229 buildroot.go:166] provisioning hostname "kubernetes-upgrade-678282"
	I0923 13:44:49.447058  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetMachineName
	I0923 13:44:49.447249  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHHostname
	I0923 13:44:49.450204  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.450589  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:49.450617  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.450799  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHPort
	I0923 13:44:49.451086  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:49.451289  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:49.451422  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHUsername
	I0923 13:44:49.451590  706229 main.go:141] libmachine: Using SSH client type: native
	I0923 13:44:49.451772  706229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0923 13:44:49.451784  706229 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-678282 && echo "kubernetes-upgrade-678282" | sudo tee /etc/hostname
	I0923 13:44:49.576847  706229 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-678282
	
	I0923 13:44:49.576875  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHHostname
	I0923 13:44:49.580714  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.581142  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:49.581179  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.581339  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHPort
	I0923 13:44:49.581541  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:49.581695  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:49.581819  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHUsername
	I0923 13:44:49.581999  706229 main.go:141] libmachine: Using SSH client type: native
	I0923 13:44:49.582192  706229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0923 13:44:49.582215  706229 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-678282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-678282/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-678282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:44:49.702918  706229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:44:49.702950  706229 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-662205/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-662205/.minikube}
	I0923 13:44:49.702997  706229 buildroot.go:174] setting up certificates
	I0923 13:44:49.703011  706229 provision.go:84] configureAuth start
	I0923 13:44:49.703027  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetMachineName
	I0923 13:44:49.703346  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetIP
	I0923 13:44:49.706229  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.706712  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:49.706754  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.707007  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHHostname
	I0923 13:44:49.709386  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.709676  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:49.709719  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.709862  706229 provision.go:143] copyHostCerts
	I0923 13:44:49.709931  706229 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem, removing ...
	I0923 13:44:49.709957  706229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem
	I0923 13:44:49.710054  706229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/ca.pem (1082 bytes)
	I0923 13:44:49.710169  706229 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem, removing ...
	I0923 13:44:49.710178  706229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem
	I0923 13:44:49.710203  706229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/cert.pem (1123 bytes)
	I0923 13:44:49.710270  706229 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem, removing ...
	I0923 13:44:49.710277  706229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem
	I0923 13:44:49.710299  706229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-662205/.minikube/key.pem (1675 bytes)
	I0923 13:44:49.710402  706229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-678282 san=[127.0.0.1 192.168.39.215 kubernetes-upgrade-678282 localhost minikube]
	I0923 13:44:49.816739  706229 provision.go:177] copyRemoteCerts
	I0923 13:44:49.816853  706229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:44:49.816888  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHHostname
	I0923 13:44:49.819758  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.820397  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:49.820433  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.820658  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHPort
	I0923 13:44:49.820878  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:49.821025  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHUsername
	I0923 13:44:49.821217  706229 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282/id_rsa Username:docker}
	I0923 13:44:49.908770  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 13:44:49.934187  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0923 13:44:49.960008  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:44:49.986294  706229 provision.go:87] duration metric: took 283.266573ms to configureAuth
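Editor's note: the configureAuth step above generates a server certificate signed by the minikube CA with the SANs listed in the log (127.0.0.1, 192.168.39.215, the machine name, localhost, minikube) and copies the resulting PEM files to /etc/docker on the guest. The following is a minimal, hypothetical Go sketch of that kind of issuance, not minikube's actual provision code; the file names and the RSA/PKCS#1 key format are assumptions for illustration.

// certsketch.go - hypothetical sketch: issue a CA-signed server cert with the
// SANs seen in the log above. Assumes an RSA CA key pair in ca.pem/ca-key.pem.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caCert, caKey := mustLoadCA("ca.pem", "ca-key.pem")

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-678282"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-678282", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.215")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	writePEM("server.pem", "CERTIFICATE", der, 0o644)
	writePEM("server-key.pem", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(serverKey), 0o600)
}

func mustLoadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
	certPEM, err := os.ReadFile(certPath)
	check(err)
	keyPEM, err := os.ReadFile(keyPath)
	check(err)
	cb, _ := pem.Decode(certPEM)
	kb, _ := pem.Decode(keyPEM)
	cert, err := x509.ParseCertificate(cb.Bytes)
	check(err)
	key, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
	check(err)
	return cert, key
}

func writePEM(path, blockType string, der []byte, mode os.FileMode) {
	check(os.WriteFile(path, pem.EncodeToMemory(&pem.Block{Type: blockType, Bytes: der}), mode))
}

func check(err error) {
	if err != nil {
		panic(err)
	}
}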
	I0923 13:44:49.986327  706229 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:44:49.986508  706229 config.go:182] Loaded profile config "kubernetes-upgrade-678282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0923 13:44:49.986600  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHHostname
	I0923 13:44:49.989607  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.989995  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:49.990029  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:49.990236  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHPort
	I0923 13:44:49.990502  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:49.990682  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:49.990795  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHUsername
	I0923 13:44:49.990924  706229 main.go:141] libmachine: Using SSH client type: native
	I0923 13:44:49.991106  706229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0923 13:44:49.991127  706229 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:44:50.235190  706229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 13:44:50.235223  706229 main.go:141] libmachine: Checking connection to Docker...
	I0923 13:44:50.235253  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetURL
	I0923 13:44:50.236705  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | Using libvirt version 6000000
	I0923 13:44:50.239351  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.239690  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:50.239715  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.239955  706229 main.go:141] libmachine: Docker is up and running!
	I0923 13:44:50.239970  706229 main.go:141] libmachine: Reticulating splines...
	I0923 13:44:50.239978  706229 client.go:171] duration metric: took 25.897584232s to LocalClient.Create
	I0923 13:44:50.240003  706229 start.go:167] duration metric: took 25.897670455s to libmachine.API.Create "kubernetes-upgrade-678282"
	I0923 13:44:50.240011  706229 start.go:293] postStartSetup for "kubernetes-upgrade-678282" (driver="kvm2")
	I0923 13:44:50.240021  706229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:44:50.240055  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .DriverName
	I0923 13:44:50.240431  706229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:44:50.240484  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHHostname
	I0923 13:44:50.242958  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.243364  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:50.243397  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.243501  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHPort
	I0923 13:44:50.243716  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:50.243869  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHUsername
	I0923 13:44:50.244017  706229 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282/id_rsa Username:docker}
	I0923 13:44:50.328402  706229 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:44:50.332677  706229 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:44:50.332724  706229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/addons for local assets ...
	I0923 13:44:50.332818  706229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-662205/.minikube/files for local assets ...
	I0923 13:44:50.332928  706229 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem -> 6694472.pem in /etc/ssl/certs
	I0923 13:44:50.333080  706229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:44:50.342722  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:44:50.367144  706229 start.go:296] duration metric: took 127.11779ms for postStartSetup
	I0923 13:44:50.367212  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetConfigRaw
	I0923 13:44:50.367876  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetIP
	I0923 13:44:50.370669  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.371024  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:50.371056  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.371287  706229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/config.json ...
	I0923 13:44:50.371539  706229 start.go:128] duration metric: took 26.051258885s to createHost
	I0923 13:44:50.371570  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHHostname
	I0923 13:44:50.374377  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.374715  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:50.374745  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.374952  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHPort
	I0923 13:44:50.375174  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:50.375314  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:50.375450  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHUsername
	I0923 13:44:50.375599  706229 main.go:141] libmachine: Using SSH client type: native
	I0923 13:44:50.375790  706229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0923 13:44:50.375811  706229 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:44:50.486851  706229 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727099090.465191139
	
	I0923 13:44:50.486881  706229 fix.go:216] guest clock: 1727099090.465191139
	I0923 13:44:50.486891  706229 fix.go:229] Guest: 2024-09-23 13:44:50.465191139 +0000 UTC Remote: 2024-09-23 13:44:50.371553628 +0000 UTC m=+26.187853595 (delta=93.637511ms)
	I0923 13:44:50.486924  706229 fix.go:200] guest clock delta is within tolerance: 93.637511ms
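Editor's note: the guest-clock check above runs `date +%s.%N` over SSH and compares the result to the host clock. A minimal Go sketch of that comparison follows; the 1-second tolerance is an illustrative assumption, not necessarily the threshold minikube applies.

// clocksketch.go - parse `date +%s.%N` output and report the guest/host delta.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(dateOutput string, local time.Time) (time.Duration, error) {
	// `date +%s.%N` prints seconds.nanoseconds, e.g. "1727099090.465191139".
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	secs, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nanos int64
	if len(parts) == 2 {
		if nanos, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(secs, nanos).Sub(local), nil
}

func main() {
	const tolerance = time.Second // illustrative threshold only
	delta, err := guestClockDelta("1727099090.465191139\n", time.Now())
	if err != nil {
		panic(err)
	}
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; clock sync recommended\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}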
	I0923 13:44:50.486929  706229 start.go:83] releasing machines lock for "kubernetes-upgrade-678282", held for 26.166729879s
	I0923 13:44:50.486954  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .DriverName
	I0923 13:44:50.487284  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetIP
	I0923 13:44:50.490461  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.490845  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:50.490877  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.491146  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .DriverName
	I0923 13:44:50.491739  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .DriverName
	I0923 13:44:50.491929  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .DriverName
	I0923 13:44:50.492062  706229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:44:50.492110  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHHostname
	I0923 13:44:50.492180  706229 ssh_runner.go:195] Run: cat /version.json
	I0923 13:44:50.492207  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHHostname
	I0923 13:44:50.495066  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.495106  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.495523  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:50.495566  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.495699  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHPort
	I0923 13:44:50.495751  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:50.495779  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:50.495926  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHPort
	I0923 13:44:50.495966  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:50.496158  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHUsername
	I0923 13:44:50.496118  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHKeyPath
	I0923 13:44:50.496402  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetSSHUsername
	I0923 13:44:50.496409  706229 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282/id_rsa Username:docker}
	I0923 13:44:50.496566  706229 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/kubernetes-upgrade-678282/id_rsa Username:docker}
	I0923 13:44:50.620080  706229 ssh_runner.go:195] Run: systemctl --version
	I0923 13:44:50.626910  706229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:44:50.792404  706229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 13:44:50.799543  706229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:44:50.799631  706229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:44:50.816051  706229 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 13:44:50.816083  706229 start.go:495] detecting cgroup driver to use...
	I0923 13:44:50.816147  706229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:44:50.833708  706229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:44:50.850177  706229 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:44:50.850235  706229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:44:50.871181  706229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:44:50.886433  706229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:44:51.008192  706229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:44:51.176139  706229 docker.go:233] disabling docker service ...
	I0923 13:44:51.176224  706229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:44:51.190865  706229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:44:51.204496  706229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:44:51.358052  706229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:44:51.489041  706229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 13:44:51.503270  706229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:44:51.522380  706229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0923 13:44:51.522456  706229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:44:51.533181  706229 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:44:51.533262  706229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:44:51.543895  706229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:44:51.554188  706229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:44:51.565000  706229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:44:51.582864  706229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:44:51.593164  706229 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:44:51.593226  706229 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 13:44:51.608503  706229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:44:51.618740  706229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:44:51.737195  706229 ssh_runner.go:195] Run: sudo systemctl restart crio
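Editor's note: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager) via sed before restarting CRI-O. The sketch below reproduces those two edits in Go under the stated assumption that the file path matches the log; it is an illustration, not minikube's code, and needs root to run against the real file.

// criocfg.go - apply the two sed-style rewrites from the log to 02-crio.conf.
package main

import (
	"os"
	"regexp"
)

const confPath = "/etc/crio/crio.conf.d/02-crio.conf"

func main() {
	data, err := os.ReadFile(confPath)
	if err != nil {
		panic(err)
	}
	text := string(data)

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.2"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(text, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(confPath, []byte(text), 0o644); err != nil {
		panic(err)
	}
	// As in the log, `systemctl restart crio` is still required to pick up the change.
}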
	I0923 13:44:51.825683  706229 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:44:51.825756  706229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:44:51.831711  706229 start.go:563] Will wait 60s for crictl version
	I0923 13:44:51.831776  706229 ssh_runner.go:195] Run: which crictl
	I0923 13:44:51.835530  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:44:51.874767  706229 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 13:44:51.874867  706229 ssh_runner.go:195] Run: crio --version
	I0923 13:44:51.903957  706229 ssh_runner.go:195] Run: crio --version
	I0923 13:44:51.936906  706229 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0923 13:44:51.938187  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetIP
	I0923 13:44:51.941036  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:51.941424  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:44:39 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:44:51.941448  706229 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:44:51.941751  706229 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 13:44:51.947678  706229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
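Editor's note: the bash pipeline above makes the host.minikube.internal entry in /etc/hosts idempotent (strip any stale line, append the new mapping). A small Go sketch of the same idea follows; the "hosts.sample" path is a stand-in, since writing the real /etc/hosts requires root.

// hostsentry.go - drop any existing entry for a host and append a fresh mapping.
package main

import (
	"os"
	"strings"
)

func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // skip blanks and any stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("hosts.sample", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}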
	I0923 13:44:51.962625  706229 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-678282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-678282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:44:51.962772  706229 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 13:44:51.962833  706229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:44:52.006167  706229 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0923 13:44:52.006251  706229 ssh_runner.go:195] Run: which lz4
	I0923 13:44:52.010364  706229 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 13:44:52.014662  706229 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 13:44:52.014715  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0923 13:44:53.692317  706229 crio.go:462] duration metric: took 1.681985096s to copy over tarball
	I0923 13:44:53.692423  706229 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 13:44:56.485653  706229 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.793185705s)
	I0923 13:44:56.485702  706229 crio.go:469] duration metric: took 2.793345185s to extract the tarball
	I0923 13:44:56.485713  706229 ssh_runner.go:146] rm: /preloaded.tar.lz4
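Editor's note: the preload step above copies the lz4 tarball over SSH and extracts it into /var with the tar invocation shown in the log. The sketch below shells out to the same tar command from Go; it is only an illustration of that step, assumes tar and lz4 are on PATH, and needs root for the real /var.

// preload.go - extract a preload tarball the way the logged tar command does.
package main

import (
	"os"
	"os/exec"
)

func extractPreload(tarball, dest string) error {
	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		panic(err)
	}
}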
	I0923 13:44:56.529675  706229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:44:56.578757  706229 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0923 13:44:56.578789  706229 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 13:44:56.578891  706229 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:44:56.578895  706229 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0923 13:44:56.578908  706229 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0923 13:44:56.578938  706229 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0923 13:44:56.578921  706229 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0923 13:44:56.578974  706229 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 13:44:56.578970  706229 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 13:44:56.578982  706229 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 13:44:56.580871  706229 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 13:44:56.580946  706229 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 13:44:56.580957  706229 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0923 13:44:56.580957  706229 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0923 13:44:56.580980  706229 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 13:44:56.580981  706229 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:44:56.580880  706229 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0923 13:44:56.581034  706229 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0923 13:44:56.834984  706229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0923 13:44:56.880126  706229 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0923 13:44:56.880175  706229 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0923 13:44:56.880223  706229 ssh_runner.go:195] Run: which crictl
	I0923 13:44:56.884679  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0923 13:44:56.890219  706229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0923 13:44:56.907182  706229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0923 13:44:56.909972  706229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0923 13:44:56.926090  706229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0923 13:44:56.930796  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0923 13:44:56.941926  706229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0923 13:44:56.970462  706229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 13:44:56.978524  706229 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0923 13:44:56.978594  706229 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0923 13:44:56.978650  706229 ssh_runner.go:195] Run: which crictl
	I0923 13:44:57.028846  706229 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0923 13:44:57.028914  706229 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0923 13:44:57.028969  706229 ssh_runner.go:195] Run: which crictl
	I0923 13:44:57.060522  706229 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0923 13:44:57.060577  706229 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 13:44:57.060651  706229 ssh_runner.go:195] Run: which crictl
	I0923 13:44:57.093917  706229 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0923 13:44:57.093974  706229 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 13:44:57.094016  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0923 13:44:57.094028  706229 ssh_runner.go:195] Run: which crictl
	I0923 13:44:57.100471  706229 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0923 13:44:57.100533  706229 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0923 13:44:57.100609  706229 ssh_runner.go:195] Run: which crictl
	I0923 13:44:57.111140  706229 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0923 13:44:57.111194  706229 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 13:44:57.111232  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0923 13:44:57.111251  706229 ssh_runner.go:195] Run: which crictl
	I0923 13:44:57.111261  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0923 13:44:57.111324  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0923 13:44:57.111330  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0923 13:44:57.176601  706229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0923 13:44:57.176678  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0923 13:44:57.219218  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0923 13:44:57.230080  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0923 13:44:57.231272  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 13:44:57.231272  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0923 13:44:57.332194  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0923 13:44:57.332224  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0923 13:44:57.332223  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0923 13:44:57.332248  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0923 13:44:57.336161  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 13:44:57.336170  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0923 13:44:57.440846  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0923 13:44:57.460441  706229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0923 13:44:57.461349  706229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0923 13:44:57.461452  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0923 13:44:57.467254  706229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0923 13:44:57.467267  706229 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 13:44:57.535015  706229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0923 13:44:57.540398  706229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0923 13:44:57.540426  706229 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0923 13:44:57.767556  706229 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:44:57.907470  706229 cache_images.go:92] duration metric: took 1.328656431s to LoadCachedImages
	W0923 13:44:57.907629  706229 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19690-662205/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0923 13:44:57.907655  706229 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.20.0 crio true true} ...
	I0923 13:44:57.907798  706229 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-678282 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-678282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:44:57.907898  706229 ssh_runner.go:195] Run: crio config
	I0923 13:44:57.955008  706229 cni.go:84] Creating CNI manager for ""
	I0923 13:44:57.955041  706229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 13:44:57.955053  706229 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:44:57.955084  706229 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-678282 NodeName:kubernetes-upgrade-678282 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0923 13:44:57.955239  706229 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-678282"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 13:44:57.955303  706229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0923 13:44:57.966080  706229 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:44:57.966186  706229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:44:57.976543  706229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0923 13:44:57.994971  706229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:44:58.012031  706229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
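Editor's note: the kubeadm configuration dumped above is generated from a handful of node parameters (IP, API server port, CRI socket, node name) before being copied to /var/tmp/minikube/kubeadm.yaml.new. The sketch below shows the templating idea with text/template for just the InitConfiguration section; it is an illustration under that assumption, not minikube's actual template, which lives in its bootstrapper package and covers far more fields.

// kubeadmtmpl.go - render a kubeadm InitConfiguration from a few parameters.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	params := struct {
		NodeIP        string
		APIServerPort int
		CRISocket     string
		NodeName      string
	}{
		NodeIP:        "192.168.39.215",
		APIServerPort: 8443,
		CRISocket:     "/var/run/crio/crio.sock",
		NodeName:      "kubernetes-upgrade-678282",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}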
	I0923 13:44:58.031295  706229 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I0923 13:44:58.035322  706229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:44:58.047437  706229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:44:58.158862  706229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:44:58.176021  706229 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282 for IP: 192.168.39.215
	I0923 13:44:58.176052  706229 certs.go:194] generating shared ca certs ...
	I0923 13:44:58.176076  706229 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:44:58.176268  706229 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 13:44:58.176324  706229 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 13:44:58.176337  706229 certs.go:256] generating profile certs ...
	I0923 13:44:58.176408  706229 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/client.key
	I0923 13:44:58.176454  706229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/client.crt with IP's: []
	I0923 13:44:58.323402  706229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/client.crt ...
	I0923 13:44:58.323435  706229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/client.crt: {Name:mk322afdab979367ea50be6dbb5fe56cf7fce785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:44:58.323615  706229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/client.key ...
	I0923 13:44:58.323631  706229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/client.key: {Name:mk58a2d779bd06a4269c0bb6c38758c1c53144f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:44:58.323706  706229 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.key.8cc3f96b
	I0923 13:44:58.323723  706229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.crt.8cc3f96b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215]
	I0923 13:44:58.611781  706229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.crt.8cc3f96b ...
	I0923 13:44:58.611819  706229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.crt.8cc3f96b: {Name:mk7d4c2b0f2c908f266ca34895095a5881b6071e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:44:58.611991  706229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.key.8cc3f96b ...
	I0923 13:44:58.612006  706229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.key.8cc3f96b: {Name:mka5a3f70f05c408fb8682c132856a276354ad73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:44:58.612076  706229 certs.go:381] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.crt.8cc3f96b -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.crt
	I0923 13:44:58.612172  706229 certs.go:385] copying /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.key.8cc3f96b -> /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.key
	I0923 13:44:58.612242  706229 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/proxy-client.key
	I0923 13:44:58.612256  706229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/proxy-client.crt with IP's: []
	I0923 13:44:58.893618  706229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/proxy-client.crt ...
	I0923 13:44:58.893660  706229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/proxy-client.crt: {Name:mk66106d2923ec3f4224405a630f078bf6a9efcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:44:58.893885  706229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/proxy-client.key ...
	I0923 13:44:58.893907  706229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/proxy-client.key: {Name:mk30002962f2733ab0b193fc74bb00bae8f18aa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:44:58.894131  706229 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 13:44:58.894178  706229 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 13:44:58.894194  706229 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 13:44:58.894226  706229 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 13:44:58.894257  706229 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:44:58.894291  706229 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 13:44:58.894352  706229 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:44:58.895000  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:44:58.923426  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:44:58.949345  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:44:58.976730  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:44:59.005417  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 13:44:59.035051  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 13:44:59.068723  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:44:59.096193  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 13:44:59.121850  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 13:44:59.148021  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 13:44:59.179207  706229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:44:59.206601  706229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:44:59.224815  706229 ssh_runner.go:195] Run: openssl version
	I0923 13:44:59.230807  706229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:44:59.242287  706229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:44:59.247037  706229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:44:59.247131  706229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:44:59.253684  706229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:44:59.265451  706229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 13:44:59.276722  706229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 13:44:59.282280  706229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 13:44:59.282380  706229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 13:44:59.290427  706229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 13:44:59.305450  706229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 13:44:59.316644  706229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 13:44:59.321438  706229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 13:44:59.321520  706229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 13:44:59.327619  706229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
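Editor's note: the trust-store steps above compute each CA certificate's OpenSSL subject hash and link it as <hash>.0 under /etc/ssl/certs. The sketch below condenses that into one Go helper that shells out to `openssl x509 -hash -noout` (the same command the log runs) and creates the symlink; paths come from the log, the direct pem-to-link shortcut is an assumption, and creating the link needs root.

// certlink.go - create the <subject-hash>.0 symlink for a CA certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}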
	I0923 13:44:59.338995  706229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:44:59.343401  706229 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:44:59.343459  706229 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-678282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-678282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:44:59.343554  706229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 13:44:59.343603  706229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:44:59.382842  706229 cri.go:89] found id: ""
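The empty result above comes from a CRI query filtered by the kube-system pod-namespace label; no control-plane containers exist yet. The same listing can be run by hand against the CRI-O socket that appears later in this log:
	# sketch: list all kube-system containers through CRI-O's CRI endpoint
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a --quiet --label io.kubernetes.pod.namespace=kube-system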
	I0923 13:44:59.382932  706229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 13:44:59.392922  706229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 13:44:59.410112  706229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 13:44:59.421908  706229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:44:59.421939  706229 kubeadm.go:157] found existing configuration files:
	
	I0923 13:44:59.422006  706229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 13:44:59.431424  706229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:44:59.431485  706229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 13:44:59.441573  706229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 13:44:59.452742  706229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:44:59.452804  706229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 13:44:59.464533  706229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 13:44:59.475494  706229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:44:59.475581  706229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 13:44:59.487641  706229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 13:44:59.498866  706229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:44:59.498940  706229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
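The block above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here the files simply do not exist yet). Condensed, the per-file logic amounts to something like:
	# sketch of the check-and-remove loop run above, one file at a time
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done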
	I0923 13:44:59.510360  706229 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 13:44:59.635775  706229 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0923 13:44:59.635894  706229 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 13:44:59.802284  706229 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 13:44:59.802432  706229 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 13:44:59.802609  706229 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0923 13:45:00.023137  706229 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 13:45:00.113248  706229 out.go:235]   - Generating certificates and keys ...
	I0923 13:45:00.113452  706229 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 13:45:00.113555  706229 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 13:45:00.131901  706229 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 13:45:00.248060  706229 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 13:45:00.418780  706229 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 13:45:00.858747  706229 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 13:45:01.022761  706229 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 13:45:01.023010  706229 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-678282 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0923 13:45:01.244360  706229 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 13:45:01.244620  706229 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-678282 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0923 13:45:01.337721  706229 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 13:45:01.478804  706229 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 13:45:01.588732  706229 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 13:45:01.588976  706229 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 13:45:01.677220  706229 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 13:45:01.873079  706229 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 13:45:02.065967  706229 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 13:45:02.324484  706229 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 13:45:02.340697  706229 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:45:02.342449  706229 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:45:02.342534  706229 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 13:45:02.471786  706229 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 13:45:02.474072  706229 out.go:235]   - Booting up control plane ...
	I0923 13:45:02.474218  706229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 13:45:02.478264  706229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 13:45:02.479215  706229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 13:45:02.480077  706229 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 13:45:02.484416  706229 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 13:45:42.480097  706229 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0923 13:45:42.480727  706229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 13:45:42.480898  706229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 13:45:47.481042  706229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 13:45:47.481337  706229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 13:45:57.480722  706229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 13:45:57.480986  706229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 13:46:17.480915  706229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 13:46:17.481295  706229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 13:46:57.482968  706229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 13:46:57.483233  706229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 13:46:57.483243  706229 kubeadm.go:310] 
	I0923 13:46:57.483293  706229 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0923 13:46:57.483341  706229 kubeadm.go:310] 		timed out waiting for the condition
	I0923 13:46:57.483348  706229 kubeadm.go:310] 
	I0923 13:46:57.483388  706229 kubeadm.go:310] 	This error is likely caused by:
	I0923 13:46:57.483432  706229 kubeadm.go:310] 		- The kubelet is not running
	I0923 13:46:57.483563  706229 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0923 13:46:57.483571  706229 kubeadm.go:310] 
	I0923 13:46:57.483698  706229 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0923 13:46:57.483738  706229 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0923 13:46:57.483777  706229 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0923 13:46:57.483785  706229 kubeadm.go:310] 
	I0923 13:46:57.483914  706229 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0923 13:46:57.484017  706229 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0923 13:46:57.484024  706229 kubeadm.go:310] 
	I0923 13:46:57.484146  706229 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0923 13:46:57.484251  706229 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0923 13:46:57.484350  706229 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0923 13:46:57.484436  706229 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0923 13:46:57.484443  706229 kubeadm.go:310] 
	I0923 13:46:57.485422  706229 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:46:57.485541  706229 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0923 13:46:57.485654  706229 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
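The init attempt above fails in wait-control-plane because the kubelet health endpoint never answers: every probe of http://localhost:10248/healthz is refused. On the node, the failure can be reproduced and investigated with the commands kubeadm itself suggests:
	# sketch: reproduce the failing probe and inspect the kubelet, per the advice above
	curl -sSL http://localhost:10248/healthz
	systemctl status kubelet
	journalctl -xeu kubelet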
	W0923 13:46:57.485871  706229 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-678282 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-678282 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-678282 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-678282 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0923 13:46:57.485940  706229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0923 13:46:58.976656  706229 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.490677976s)
	I0923 13:46:58.976807  706229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
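Between attempts, minikube wipes the half-initialized state with kubeadm reset against the CRI-O socket and then checks whether the kubelet service is active, as shown in the two commands above. The reset step on its own (binary path as in the log):
	# sketch: the cleanup run between the two init attempts
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force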
	I0923 13:46:58.995133  706229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 13:46:59.006400  706229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:46:59.006432  706229 kubeadm.go:157] found existing configuration files:
	
	I0923 13:46:59.006501  706229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 13:46:59.016554  706229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:46:59.016630  706229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 13:46:59.027065  706229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 13:46:59.037276  706229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:46:59.037339  706229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 13:46:59.047767  706229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 13:46:59.057534  706229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:46:59.057599  706229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 13:46:59.068786  706229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 13:46:59.079197  706229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:46:59.079279  706229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 13:46:59.090726  706229 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 13:46:59.335463  706229 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:48:55.710328  706229 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0923 13:48:55.710496  706229 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0923 13:48:55.712246  706229 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0923 13:48:55.712304  706229 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 13:48:55.712417  706229 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 13:48:55.712571  706229 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 13:48:55.712710  706229 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0923 13:48:55.712795  706229 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 13:48:55.714743  706229 out.go:235]   - Generating certificates and keys ...
	I0923 13:48:55.714829  706229 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 13:48:55.714904  706229 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 13:48:55.715009  706229 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 13:48:55.715111  706229 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 13:48:55.715221  706229 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 13:48:55.715310  706229 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 13:48:55.715399  706229 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 13:48:55.715476  706229 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 13:48:55.715601  706229 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 13:48:55.715706  706229 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 13:48:55.715760  706229 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 13:48:55.715839  706229 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 13:48:55.715916  706229 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 13:48:55.715995  706229 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 13:48:55.716072  706229 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 13:48:55.716120  706229 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 13:48:55.716251  706229 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:48:55.716351  706229 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:48:55.716393  706229 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 13:48:55.716463  706229 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 13:48:55.718106  706229 out.go:235]   - Booting up control plane ...
	I0923 13:48:55.718195  706229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 13:48:55.718280  706229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 13:48:55.718369  706229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 13:48:55.718485  706229 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 13:48:55.718639  706229 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 13:48:55.718687  706229 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0923 13:48:55.718744  706229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 13:48:55.718966  706229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 13:48:55.719069  706229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 13:48:55.719237  706229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 13:48:55.719327  706229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 13:48:55.719536  706229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 13:48:55.719646  706229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 13:48:55.719854  706229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 13:48:55.719921  706229 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 13:48:55.720065  706229 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 13:48:55.720071  706229 kubeadm.go:310] 
	I0923 13:48:55.720120  706229 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0923 13:48:55.720179  706229 kubeadm.go:310] 		timed out waiting for the condition
	I0923 13:48:55.720189  706229 kubeadm.go:310] 
	I0923 13:48:55.720241  706229 kubeadm.go:310] 	This error is likely caused by:
	I0923 13:48:55.720290  706229 kubeadm.go:310] 		- The kubelet is not running
	I0923 13:48:55.720434  706229 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0923 13:48:55.720445  706229 kubeadm.go:310] 
	I0923 13:48:55.720537  706229 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0923 13:48:55.720569  706229 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0923 13:48:55.720599  706229 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0923 13:48:55.720605  706229 kubeadm.go:310] 
	I0923 13:48:55.720697  706229 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0923 13:48:55.720770  706229 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0923 13:48:55.720777  706229 kubeadm.go:310] 
	I0923 13:48:55.720874  706229 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0923 13:48:55.720964  706229 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0923 13:48:55.721046  706229 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0923 13:48:55.721143  706229 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0923 13:48:55.721218  706229 kubeadm.go:310] 
	I0923 13:48:55.721223  706229 kubeadm.go:394] duration metric: took 3m56.37776764s to StartCluster
	I0923 13:48:55.721271  706229 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:48:55.721328  706229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:48:55.767840  706229 cri.go:89] found id: ""
	I0923 13:48:55.767861  706229 logs.go:276] 0 containers: []
	W0923 13:48:55.767869  706229 logs.go:278] No container was found matching "kube-apiserver"
	I0923 13:48:55.767875  706229 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:48:55.767925  706229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:48:55.807316  706229 cri.go:89] found id: ""
	I0923 13:48:55.807351  706229 logs.go:276] 0 containers: []
	W0923 13:48:55.807361  706229 logs.go:278] No container was found matching "etcd"
	I0923 13:48:55.807369  706229 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:48:55.807437  706229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:48:55.842917  706229 cri.go:89] found id: ""
	I0923 13:48:55.842946  706229 logs.go:276] 0 containers: []
	W0923 13:48:55.842954  706229 logs.go:278] No container was found matching "coredns"
	I0923 13:48:55.842961  706229 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:48:55.843016  706229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:48:55.878556  706229 cri.go:89] found id: ""
	I0923 13:48:55.878594  706229 logs.go:276] 0 containers: []
	W0923 13:48:55.878605  706229 logs.go:278] No container was found matching "kube-scheduler"
	I0923 13:48:55.878614  706229 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:48:55.878687  706229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:48:55.913681  706229 cri.go:89] found id: ""
	I0923 13:48:55.913716  706229 logs.go:276] 0 containers: []
	W0923 13:48:55.913728  706229 logs.go:278] No container was found matching "kube-proxy"
	I0923 13:48:55.913736  706229 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:48:55.913804  706229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:48:55.950795  706229 cri.go:89] found id: ""
	I0923 13:48:55.950823  706229 logs.go:276] 0 containers: []
	W0923 13:48:55.950837  706229 logs.go:278] No container was found matching "kube-controller-manager"
	I0923 13:48:55.950844  706229 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:48:55.950902  706229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:48:55.989251  706229 cri.go:89] found id: ""
	I0923 13:48:55.989284  706229 logs.go:276] 0 containers: []
	W0923 13:48:55.989293  706229 logs.go:278] No container was found matching "kindnet"
	I0923 13:48:55.989306  706229 logs.go:123] Gathering logs for kubelet ...
	I0923 13:48:55.989321  706229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 13:48:56.048466  706229 logs.go:123] Gathering logs for dmesg ...
	I0923 13:48:56.048507  706229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:48:56.062289  706229 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:48:56.062333  706229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0923 13:48:56.185408  706229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0923 13:48:56.185430  706229 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:48:56.185444  706229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:48:56.298086  706229 logs.go:123] Gathering logs for container status ...
	I0923 13:48:56.298130  706229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
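Once the retry also times out, minikube gathers diagnostics before reporting the failure: the kubelet and CRI-O journals, dmesg, a kubectl describe nodes (which fails here because the API server never came up), and a container status listing. Collected in one place, the commands it runs above are:
	# sketch: the post-failure diagnostics gathered above, runnable on the node
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a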
	W0923 13:48:56.336137  706229 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0923 13:48:56.336219  706229 out.go:270] * 
	* 
	W0923 13:48:56.336286  706229 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0923 13:48:56.336299  706229 out.go:270] * 
	* 
	W0923 13:48:56.337394  706229 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 13:48:56.340809  706229 out.go:201] 
	W0923 13:48:56.342198  706229 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0923 13:48:56.342245  706229 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0923 13:48:56.342268  706229 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0923 13:48:56.343901  706229 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-678282 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
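The kubeadm output captured above already names the useful next steps for a K8S_KUBELET_NOT_RUNNING failure. A minimal triage sketch against this profile (a hedged example, assuming the profile's VM is still running and reachable with `minikube ssh`; the commands are the ones the captured output itself suggests):

    # inspect the kubelet on the node
    out/minikube-linux-amd64 -p kubernetes-upgrade-678282 ssh -- sudo systemctl status kubelet
    out/minikube-linux-amd64 -p kubernetes-upgrade-678282 ssh -- sudo journalctl -xeu kubelet
    # list control-plane containers under cri-o
    out/minikube-linux-amd64 -p kubernetes-upgrade-678282 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # if the kubelet logs point at a cgroup-driver mismatch, retry with the override suggested above
    out/minikube-linux-amd64 start -p kubernetes-upgrade-678282 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd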
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-678282
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-678282: (1.752477861s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-678282 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-678282 status --format={{.Host}}: exit status 7 (78.386933ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
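The "(may be ok)" note reflects how `minikube status` reports state: component health (host, cluster, Kubernetes) is conveyed through the exit code as well as the printed text, so a cleanly stopped profile exits non-zero by design rather than because of a failure. A hedged way to reproduce the same check by hand against this profile:

    out/minikube-linux-amd64 -p kubernetes-upgrade-678282 status --format={{.Host}}; echo "exit=$?"
    # after 'minikube stop' this is expected to print "Stopped" with a non-zero exit code (7 in this run)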
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-678282 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-678282 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m25.248189045s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-678282 version --output=json
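The version check above compares client and server versions; a complementary, hedged check that the node itself reports the upgraded kubelet and the CRI-O runtime seen later in these logs (same kubeconfig context as the test uses):

    kubectl --context kubernetes-upgrade-678282 get nodes -o wide
    # the VERSION column should read v1.31.1 and CONTAINER-RUNTIME should read cri-o://1.29.1 for this run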
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-678282 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-678282 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.735759ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-678282] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-678282
	    minikube start -p kubernetes-upgrade-678282 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6782822 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-678282 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-678282 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0923 13:50:29.177423  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:50:36.850527  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-678282 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.109129898s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-23 13:51:12.756574018 +0000 UTC m=+5006.191989503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-678282 -n kubernetes-upgrade-678282
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-678282 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-678282 logs -n 25: (1.786948294s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-488767 sudo find            | cilium-488767             | jenkins | v1.34.0 | 23 Sep 24 13:48 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-488767 sudo crio            | cilium-488767             | jenkins | v1.34.0 | 23 Sep 24 13:48 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-488767                      | cilium-488767             | jenkins | v1.34.0 | 23 Sep 24 13:48 UTC | 23 Sep 24 13:48 UTC |
	| start   | -p pause-429220 --memory=2048         | pause-429220              | jenkins | v1.34.0 | 23 Sep 24 13:48 UTC | 23 Sep 24 13:49 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-861603             | cert-expiration-861603    | jenkins | v1.34.0 | 23 Sep 24 13:48 UTC | 23 Sep 24 13:49 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-640763           | force-systemd-env-640763  | jenkins | v1.34.0 | 23 Sep 24 13:48 UTC | 23 Sep 24 13:48 UTC |
	| start   | -p force-systemd-flag-354291          | force-systemd-flag-354291 | jenkins | v1.34.0 | 23 Sep 24 13:48 UTC | 23 Sep 24 13:50 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-678282          | kubernetes-upgrade-678282 | jenkins | v1.34.0 | 23 Sep 24 13:48 UTC | 23 Sep 24 13:48 UTC |
	| start   | -p kubernetes-upgrade-678282          | kubernetes-upgrade-678282 | jenkins | v1.34.0 | 23 Sep 24 13:48 UTC | 23 Sep 24 13:50 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-429220                       | pause-429220              | jenkins | v1.34.0 | 23 Sep 24 13:49 UTC | 23 Sep 24 13:50 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-354291 ssh cat     | force-systemd-flag-354291 | jenkins | v1.34.0 | 23 Sep 24 13:50 UTC | 23 Sep 24 13:50 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-354291          | force-systemd-flag-354291 | jenkins | v1.34.0 | 23 Sep 24 13:50 UTC | 23 Sep 24 13:50 UTC |
	| start   | -p cert-options-049900                | cert-options-049900       | jenkins | v1.34.0 | 23 Sep 24 13:50 UTC | 23 Sep 24 13:50 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-678282          | kubernetes-upgrade-678282 | jenkins | v1.34.0 | 23 Sep 24 13:50 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-678282          | kubernetes-upgrade-678282 | jenkins | v1.34.0 | 23 Sep 24 13:50 UTC | 23 Sep 24 13:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| pause   | -p pause-429220                       | pause-429220              | jenkins | v1.34.0 | 23 Sep 24 13:50 UTC | 23 Sep 24 13:50 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-429220                       | pause-429220              | jenkins | v1.34.0 | 23 Sep 24 13:50 UTC | 23 Sep 24 13:50 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-429220                       | pause-429220              | jenkins | v1.34.0 | 23 Sep 24 13:50 UTC | 23 Sep 24 13:50 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-429220                       | pause-429220              | jenkins | v1.34.0 | 23 Sep 24 13:50 UTC | 23 Sep 24 13:50 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-429220                       | pause-429220              | jenkins | v1.34.0 | 23 Sep 24 13:50 UTC | 23 Sep 24 13:50 UTC |
	| start   | -p auto-488767 --memory=3072          | auto-488767               | jenkins | v1.34.0 | 23 Sep 24 13:50 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-049900 ssh               | cert-options-049900       | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-049900 -- sudo        | cert-options-049900       | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-049900                | cert-options-049900       | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	| start   | -p enable-default-cni-488767          | enable-default-cni-488767 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --enable-default-cni=true             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:51:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:51:01.542213  714451 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:51:01.542520  714451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:51:01.542531  714451 out.go:358] Setting ErrFile to fd 2...
	I0923 13:51:01.542536  714451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:51:01.542719  714451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:51:01.543390  714451 out.go:352] Setting JSON to false
	I0923 13:51:01.544433  714451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12805,"bootTime":1727086657,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 13:51:01.544560  714451 start.go:139] virtualization: kvm guest
	I0923 13:51:01.547179  714451 out.go:177] * [enable-default-cni-488767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 13:51:01.549328  714451 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:51:01.549351  714451 notify.go:220] Checking for updates...
	I0923 13:51:01.552670  714451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:51:01.554426  714451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 13:51:01.556242  714451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 13:51:01.558083  714451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 13:51:01.559959  714451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:51:01.562227  714451 config.go:182] Loaded profile config "auto-488767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:51:01.562354  714451 config.go:182] Loaded profile config "cert-expiration-861603": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:51:01.562480  714451 config.go:182] Loaded profile config "kubernetes-upgrade-678282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:51:01.562604  714451 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:51:01.607311  714451 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 13:51:01.609220  714451 start.go:297] selected driver: kvm2
	I0923 13:51:01.609248  714451 start.go:901] validating driver "kvm2" against <nil>
	I0923 13:51:01.609269  714451 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:51:01.610150  714451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:51:01.610246  714451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 13:51:01.627020  714451 install.go:137] /home/jenkins/minikube-integration/19690-662205/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0923 13:51:01.627076  714451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0923 13:51:01.627303  714451 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0923 13:51:01.627329  714451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:51:01.627356  714451 cni.go:84] Creating CNI manager for "bridge"
	I0923 13:51:01.627361  714451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 13:51:01.627422  714451 start.go:340] cluster config:
	{Name:enable-default-cni-488767 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-488767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:51:01.627528  714451 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:51:01.629581  714451 out.go:177] * Starting "enable-default-cni-488767" primary control-plane node in "enable-default-cni-488767" cluster
	I0923 13:51:02.380068  713584 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.221527652s)
	I0923 13:51:02.380107  713584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:51:02.380168  713584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:51:02.385884  713584 start.go:563] Will wait 60s for crictl version
	I0923 13:51:02.385956  713584 ssh_runner.go:195] Run: which crictl
	I0923 13:51:02.389902  713584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:51:02.430990  713584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 13:51:02.431078  713584 ssh_runner.go:195] Run: crio --version
	I0923 13:51:02.459479  713584 ssh_runner.go:195] Run: crio --version
	I0923 13:51:02.490146  713584 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 13:51:02.491326  713584 main.go:141] libmachine: (kubernetes-upgrade-678282) Calling .GetIP
	I0923 13:51:02.494662  713584 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:51:02.495109  713584 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:31:f0", ip: ""} in network mk-kubernetes-upgrade-678282: {Iface:virbr1 ExpiryTime:2024-09-23 14:49:57 +0000 UTC Type:0 Mac:52:54:00:00:31:f0 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:kubernetes-upgrade-678282 Clientid:01:52:54:00:00:31:f0}
	I0923 13:51:02.495139  713584 main.go:141] libmachine: (kubernetes-upgrade-678282) DBG | domain kubernetes-upgrade-678282 has defined IP address 192.168.39.215 and MAC address 52:54:00:00:31:f0 in network mk-kubernetes-upgrade-678282
	I0923 13:51:02.495351  713584 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 13:51:02.499647  713584 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-678282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-678282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:51:02.499763  713584 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:51:02.499806  713584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:51:02.539925  713584 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:51:02.539954  713584 crio.go:433] Images already preloaded, skipping extraction
	I0923 13:51:02.540017  713584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:51:02.574643  713584 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:51:02.574671  713584 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:51:02.574679  713584 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.31.1 crio true true} ...
	I0923 13:51:02.574804  713584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-678282 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-678282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:51:02.574899  713584 ssh_runner.go:195] Run: crio config
	I0923 13:51:02.623022  713584 cni.go:84] Creating CNI manager for ""
	I0923 13:51:02.623053  713584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 13:51:02.623066  713584 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:51:02.623096  713584 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-678282 NodeName:kubernetes-upgrade-678282 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:51:02.623289  713584 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-678282"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 13:51:02.623374  713584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:51:02.633297  713584 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:51:02.633368  713584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:51:02.642683  713584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0923 13:51:02.659909  713584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:51:02.676832  713584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0923 13:51:02.693859  713584 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I0923 13:51:02.697770  713584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:51:02.838971  713584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:51:02.853487  713584 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282 for IP: 192.168.39.215
	I0923 13:51:02.853522  713584 certs.go:194] generating shared ca certs ...
	I0923 13:51:02.853548  713584 certs.go:226] acquiring lock for ca certs: {Name:mk5f47b34d40554f07f6507fea971236e4735d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:51:02.853750  713584 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key
	I0923 13:51:02.853788  713584 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key
	I0923 13:51:02.853798  713584 certs.go:256] generating profile certs ...
	I0923 13:51:02.853936  713584 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/client.key
	I0923 13:51:02.854012  713584 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.key.8cc3f96b
	I0923 13:51:02.854067  713584 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/proxy-client.key
	I0923 13:51:02.854201  713584 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem (1338 bytes)
	W0923 13:51:02.854240  713584 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447_empty.pem, impossibly tiny 0 bytes
	I0923 13:51:02.854255  713584 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 13:51:02.854289  713584 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/ca.pem (1082 bytes)
	I0923 13:51:02.854319  713584 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:51:02.854354  713584 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/certs/key.pem (1675 bytes)
	I0923 13:51:02.854409  713584 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem (1708 bytes)
	I0923 13:51:02.855016  713584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:51:02.882472  713584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:51:02.913032  713584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:51:02.940538  713584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:51:02.967083  713584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 13:51:02.993115  713584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 13:51:03.019412  713584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:51:03.044487  713584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/kubernetes-upgrade-678282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 13:51:03.070856  713584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:51:03.097447  713584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/certs/669447.pem --> /usr/share/ca-certificates/669447.pem (1338 bytes)
	I0923 13:51:03.123057  713584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/ssl/certs/6694472.pem --> /usr/share/ca-certificates/6694472.pem (1708 bytes)
	I0923 13:51:03.147575  713584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:51:03.165737  713584 ssh_runner.go:195] Run: openssl version
	I0923 13:51:03.172443  713584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669447.pem && ln -fs /usr/share/ca-certificates/669447.pem /etc/ssl/certs/669447.pem"
	I0923 13:51:03.183928  713584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669447.pem
	I0923 13:51:03.188870  713584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:47 /usr/share/ca-certificates/669447.pem
	I0923 13:51:03.188967  713584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669447.pem
	I0923 13:51:03.195841  713584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669447.pem /etc/ssl/certs/51391683.0"
	I0923 13:51:03.206787  713584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6694472.pem && ln -fs /usr/share/ca-certificates/6694472.pem /etc/ssl/certs/6694472.pem"
	I0923 13:51:03.218150  713584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6694472.pem
	I0923 13:51:03.222589  713584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:47 /usr/share/ca-certificates/6694472.pem
	I0923 13:51:03.222661  713584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6694472.pem
	I0923 13:51:03.228187  713584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6694472.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 13:51:03.238758  713584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:51:03.249751  713584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:51:03.254243  713584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:28 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:51:03.254353  713584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:51:03.260180  713584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:51:03.270725  713584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:51:03.275427  713584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 13:51:03.281155  713584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 13:51:03.286836  713584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 13:51:03.292432  713584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 13:51:03.298169  713584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 13:51:03.304852  713584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 13:51:03.310951  713584 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-678282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-678282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:51:03.311070  713584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 13:51:03.311149  713584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:51:03.348140  713584 cri.go:89] found id: "e2129defd0dd421103080d63499100100a922f36a2bc2b34079def344edd369e"
	I0923 13:51:03.348164  713584 cri.go:89] found id: "ce6598d9c1a3015cf2add73aa3020c4d60af559fd99eb6fbe3596df742c3a0c7"
	I0923 13:51:03.348168  713584 cri.go:89] found id: "30b26f6cdd8c7c345b60ee9c0d378f374de0893f347c49c85554a62226d03eca"
	I0923 13:51:03.348185  713584 cri.go:89] found id: "9e6405a8bb95a0adf8558180b7df6a7d9ae6d80d8a41474bef6d4b242ce57c5e"
	I0923 13:51:03.348187  713584 cri.go:89] found id: "6dc3cf66a254406ca5e774bdf6ff6e91b43aeb25e455383d780584c63dbb6bd5"
	I0923 13:51:03.348191  713584 cri.go:89] found id: "197b794eaf4498aaca1b2fe1d2bbe8f322bb65ac7e97473b7af6945c30a56357"
	I0923 13:51:03.348193  713584 cri.go:89] found id: "d36227a90472bcc8b7fdaef75361c989c8479edd87d2a30c517d8cb52a45235e"
	I0923 13:51:03.348196  713584 cri.go:89] found id: "e506174330aa34efdb50e80ffcd9f7b78867183cd845937ef84457b46705a130"
	I0923 13:51:03.348199  713584 cri.go:89] found id: "898924ca6b8ea8e2327950a3d9ffa1002afd0a1441ae0c2c1454a489b92e0b90"
	I0923 13:51:03.348203  713584 cri.go:89] found id: "941b39f57c5b507c60c22df3c218ae65d3d87525c40279cc5303e5427c3f0d88"
	I0923 13:51:03.348205  713584 cri.go:89] found id: "6b12f3ff76249b1d3984cd6535e033c104affe3011617ff339c0754b3b99b25e"
	I0923 13:51:03.348207  713584 cri.go:89] found id: "7da9068eb15c8f17bd8990f708518697fb726fadd4f6646b835db41db5c456dd"
	I0923 13:51:03.348210  713584 cri.go:89] found id: "198eaab8c228eda2becb53bf6e41f0d36fc168ab193fa29aeadebd123b34d3ec"
	I0923 13:51:03.348212  713584 cri.go:89] found id: "d3fd39bd003fc6b5f7bfcf55b4834bfb16a97412a59e573555ea5b176da42ed3"
	I0923 13:51:03.348218  713584 cri.go:89] found id: "5dd637223d93dd53ad1feebbb1717551e3698d763d5c8e69a06965be2f562274"
	I0923 13:51:03.348220  713584 cri.go:89] found id: ""
	I0923 13:51:03.348261  713584 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.491070410Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099473491049333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f83af78-0ac9-4c74-9d83-46a873dfcebd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.491526714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c179796c-1561-4401-87a3-f513ac7d0dae name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.491578076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c179796c-1561-4401-87a3-f513ac7d0dae name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.491936613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d5c68197bfba5c80016a93aa9acd39871f34a5160bc7f86c5a194367d36d2cc,PodSandboxId:36bcf87d509ae65d4f443fbccad47ce83faa69ed815754ad036245836836038f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727099470406491357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtnmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77e82e2-b6fb-4c22-a518-81acc17a31f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a42494fa68dd7493a9d36b732748d292a3df9645b5d5f15bf66544e89b4b5c,PodSandboxId:3018004767b76578e7cc3659555dba0a0fc86df34c8b9d383ac480ae223ee3f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727099470384585392,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2w6lx,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: c93af993-6b8c-4645-b682-ab222a8a43f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20171a5f3c0b2d410ba27a1cbfa76d7f062312beceaeb4898c629ed7610d343,PodSandboxId:cf530d8641a8b62bdab99c93896affb608b6f8ace4c9ba3ab6949ed9fb853a41,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CON
TAINER_RUNNING,CreatedAt:1727099466540699605,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a98bc3ee9d3e4e319fd207452757fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f5e2b6885287f4fa410e4dfb08a3cd4b91a6aff77ef85acdd19ee40adf7152,PodSandboxId:f8b24dc31a59a73323c289ebffa390d5917004d86dce78203bf7011eed990022,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CON
TAINER_RUNNING,CreatedAt:1727099464203847614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c90098-67b5-49e8-8a14-ec1deb3ff55f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a253820f32261532ce9fe05dabe40b085b0b6a2438b0802dab253791f9440f2f,PodSandboxId:bf61ea459595ab32e86491a101ca6ad41ddb3f0e227bc8ca16c506436ed82ca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,Create
dAt:1727099464136726046,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcsxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59d20d9-753f-4981-ba22-6c55aa2a8969,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9787a283f40c89345631a1471975c72c58125c2d7e909850b70017bffce591c,PodSandboxId:857205edaf93e30dc661321f31dee6e7eb4c64359904a3974a4bd8182b30e87d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727099464115564865,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f384f7942e27af7a040b09fc19f4248,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0bacbc97eebaa15a8d34d1459778f8f4bd28154e3e1602f29562666efaaf07,PodSandboxId:4ce8bb5da6c70e8a124d1674ccaf33fb3c366a1f1b18b07e995b6256f764abff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727099464084832657,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c977a6510f3bc4a88f0a6d8eed31183b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71736d1ae80b5607e3f7296e68194b234a6eaab58a0cf49dea774a28e6dd271,PodSandboxId:cf530d8641a8b62bdab99c93896affb608b6f8ace4c9ba3ab6949ed9fb853a41,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_CREATED,CreatedAt:1727099463999883332,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a98bc3ee9d3e4e319fd207452757fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce7aa385c0b51686c0cb560592a310f1e43457006b8137f9190a1795877c64a,PodSandboxId:cfcf3c15739e1d7058e60431cadbfa2001de769421f40215d998645ca3c89c6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727099463840147713,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2fbdec78e182332cae2fb826f3bcc37,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2129defd0dd421103080d63499100100a922f36a2bc2b34079def344edd369e,PodSandboxId:e3d690d9b2503c8f0a4b23ba9f27be03d185a24e5e839f9d2d4d1537f6b22e99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727099451166762368,Labels:map[string]string{io.kubernet
es.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2w6lx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c93af993-6b8c-4645-b682-ab222a8a43f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce6598d9c1a3015cf2add73aa3020c4d60af559fd99eb6fbe3596df742c3a0c7,PodSandboxId:a9d84517cd0b886355455afcb496d4070a248047f48fed242a498b4241aea482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727099450806400718,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtnmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77e82e2-b6fb-4c22-a518-81acc17a31f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6405a8bb95a0adf8558180b7df6a7d9ae6d80d8a41474bef6d4b242ce57c5e,PodSandboxId:104e58d2ebe01f4c317a945852570735f040760310b605036fa
15fa1f9b5da94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727099449817097502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2fbdec78e182332cae2fb826f3bcc37,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dc3cf66a254406ca5e774bdf6ff6e91b43aeb25e455383d780584c63dbb6bd5,PodSandboxId:610a7188b85c870b538b9bf722663b
8d08fd924171479d1354d399b408c29fee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727099449630850172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c977a6510f3bc4a88f0a6d8eed31183b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197b794eaf4498aaca1b2fe1d2bbe8f322bb65ac7e97473b7af6945c30a56357,PodSandboxId:775ed54c4ab14eaf6ed7cb04f81dde757543942ec75d5ce5a4c5ce7997261d1f,M
etadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727099449584345016,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f384f7942e27af7a040b09fc19f4248,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e506174330aa34efdb50e80ffcd9f7b78867183cd845937ef84457b46705a130,PodSandboxId:e5e139f5fe0ed9270ffbc89c66c957c8d2605b8d4d1f0e4e341f99056614dd31,Metadat
a:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727099449460382970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c90098-67b5-49e8-8a14-ec1deb3ff55f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36227a90472bcc8b7fdaef75361c989c8479edd87d2a30c517d8cb52a45235e,PodSandboxId:d853321f934bdaea8043d1f6d74e3375694048cce2eeffc966ad38ebc2d301d1,Metadata:&ContainerM
etadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727099449498252823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcsxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59d20d9-753f-4981-ba22-6c55aa2a8969,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c179796c-1561-4401-87a3-f513ac7d0dae name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.545886452Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c4f9fc3-fdcd-4fb2-8daf-1c60b13b7d32 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.545991445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c4f9fc3-fdcd-4fb2-8daf-1c60b13b7d32 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.547400801Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f4f0d1c-feb8-4c88-9b81-0562cf8202e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.548132019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099473547921723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f4f0d1c-feb8-4c88-9b81-0562cf8202e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.549002184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc966243-bad9-413a-9263-5947e0e127c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.549148341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc966243-bad9-413a-9263-5947e0e127c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.549628678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d5c68197bfba5c80016a93aa9acd39871f34a5160bc7f86c5a194367d36d2cc,PodSandboxId:36bcf87d509ae65d4f443fbccad47ce83faa69ed815754ad036245836836038f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727099470406491357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtnmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77e82e2-b6fb-4c22-a518-81acc17a31f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a42494fa68dd7493a9d36b732748d292a3df9645b5d5f15bf66544e89b4b5c,PodSandboxId:3018004767b76578e7cc3659555dba0a0fc86df34c8b9d383ac480ae223ee3f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727099470384585392,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2w6lx,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: c93af993-6b8c-4645-b682-ab222a8a43f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20171a5f3c0b2d410ba27a1cbfa76d7f062312beceaeb4898c629ed7610d343,PodSandboxId:cf530d8641a8b62bdab99c93896affb608b6f8ace4c9ba3ab6949ed9fb853a41,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CON
TAINER_RUNNING,CreatedAt:1727099466540699605,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a98bc3ee9d3e4e319fd207452757fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f5e2b6885287f4fa410e4dfb08a3cd4b91a6aff77ef85acdd19ee40adf7152,PodSandboxId:f8b24dc31a59a73323c289ebffa390d5917004d86dce78203bf7011eed990022,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CON
TAINER_RUNNING,CreatedAt:1727099464203847614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c90098-67b5-49e8-8a14-ec1deb3ff55f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a253820f32261532ce9fe05dabe40b085b0b6a2438b0802dab253791f9440f2f,PodSandboxId:bf61ea459595ab32e86491a101ca6ad41ddb3f0e227bc8ca16c506436ed82ca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,Create
dAt:1727099464136726046,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcsxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59d20d9-753f-4981-ba22-6c55aa2a8969,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9787a283f40c89345631a1471975c72c58125c2d7e909850b70017bffce591c,PodSandboxId:857205edaf93e30dc661321f31dee6e7eb4c64359904a3974a4bd8182b30e87d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727099464115564865,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f384f7942e27af7a040b09fc19f4248,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0bacbc97eebaa15a8d34d1459778f8f4bd28154e3e1602f29562666efaaf07,PodSandboxId:4ce8bb5da6c70e8a124d1674ccaf33fb3c366a1f1b18b07e995b6256f764abff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727099464084832657,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c977a6510f3bc4a88f0a6d8eed31183b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71736d1ae80b5607e3f7296e68194b234a6eaab58a0cf49dea774a28e6dd271,PodSandboxId:cf530d8641a8b62bdab99c93896affb608b6f8ace4c9ba3ab6949ed9fb853a41,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_CREATED,CreatedAt:1727099463999883332,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a98bc3ee9d3e4e319fd207452757fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce7aa385c0b51686c0cb560592a310f1e43457006b8137f9190a1795877c64a,PodSandboxId:cfcf3c15739e1d7058e60431cadbfa2001de769421f40215d998645ca3c89c6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727099463840147713,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2fbdec78e182332cae2fb826f3bcc37,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2129defd0dd421103080d63499100100a922f36a2bc2b34079def344edd369e,PodSandboxId:e3d690d9b2503c8f0a4b23ba9f27be03d185a24e5e839f9d2d4d1537f6b22e99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727099451166762368,Labels:map[string]string{io.kubernet
es.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2w6lx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c93af993-6b8c-4645-b682-ab222a8a43f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce6598d9c1a3015cf2add73aa3020c4d60af559fd99eb6fbe3596df742c3a0c7,PodSandboxId:a9d84517cd0b886355455afcb496d4070a248047f48fed242a498b4241aea482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727099450806400718,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtnmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77e82e2-b6fb-4c22-a518-81acc17a31f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6405a8bb95a0adf8558180b7df6a7d9ae6d80d8a41474bef6d4b242ce57c5e,PodSandboxId:104e58d2ebe01f4c317a945852570735f040760310b605036fa
15fa1f9b5da94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727099449817097502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2fbdec78e182332cae2fb826f3bcc37,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dc3cf66a254406ca5e774bdf6ff6e91b43aeb25e455383d780584c63dbb6bd5,PodSandboxId:610a7188b85c870b538b9bf722663b
8d08fd924171479d1354d399b408c29fee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727099449630850172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c977a6510f3bc4a88f0a6d8eed31183b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197b794eaf4498aaca1b2fe1d2bbe8f322bb65ac7e97473b7af6945c30a56357,PodSandboxId:775ed54c4ab14eaf6ed7cb04f81dde757543942ec75d5ce5a4c5ce7997261d1f,M
etadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727099449584345016,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f384f7942e27af7a040b09fc19f4248,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e506174330aa34efdb50e80ffcd9f7b78867183cd845937ef84457b46705a130,PodSandboxId:e5e139f5fe0ed9270ffbc89c66c957c8d2605b8d4d1f0e4e341f99056614dd31,Metadat
a:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727099449460382970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c90098-67b5-49e8-8a14-ec1deb3ff55f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36227a90472bcc8b7fdaef75361c989c8479edd87d2a30c517d8cb52a45235e,PodSandboxId:d853321f934bdaea8043d1f6d74e3375694048cce2eeffc966ad38ebc2d301d1,Metadata:&ContainerM
etadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727099449498252823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcsxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59d20d9-753f-4981-ba22-6c55aa2a8969,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc966243-bad9-413a-9263-5947e0e127c5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.598014183Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6dab5727-c7fc-42f4-b619-fab73cdb5281 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.598112780Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6dab5727-c7fc-42f4-b619-fab73cdb5281 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.600067235Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56910623-9150-43f5-ba84-dc38935e46d6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.600620157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099473600589086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56910623-9150-43f5-ba84-dc38935e46d6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.601384556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34f164a3-b1f7-4bf4-921e-2c85a008355a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.601477738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34f164a3-b1f7-4bf4-921e-2c85a008355a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.601930950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d5c68197bfba5c80016a93aa9acd39871f34a5160bc7f86c5a194367d36d2cc,PodSandboxId:36bcf87d509ae65d4f443fbccad47ce83faa69ed815754ad036245836836038f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727099470406491357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtnmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77e82e2-b6fb-4c22-a518-81acc17a31f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a42494fa68dd7493a9d36b732748d292a3df9645b5d5f15bf66544e89b4b5c,PodSandboxId:3018004767b76578e7cc3659555dba0a0fc86df34c8b9d383ac480ae223ee3f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727099470384585392,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2w6lx,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: c93af993-6b8c-4645-b682-ab222a8a43f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20171a5f3c0b2d410ba27a1cbfa76d7f062312beceaeb4898c629ed7610d343,PodSandboxId:cf530d8641a8b62bdab99c93896affb608b6f8ace4c9ba3ab6949ed9fb853a41,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CON
TAINER_RUNNING,CreatedAt:1727099466540699605,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a98bc3ee9d3e4e319fd207452757fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f5e2b6885287f4fa410e4dfb08a3cd4b91a6aff77ef85acdd19ee40adf7152,PodSandboxId:f8b24dc31a59a73323c289ebffa390d5917004d86dce78203bf7011eed990022,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CON
TAINER_RUNNING,CreatedAt:1727099464203847614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c90098-67b5-49e8-8a14-ec1deb3ff55f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a253820f32261532ce9fe05dabe40b085b0b6a2438b0802dab253791f9440f2f,PodSandboxId:bf61ea459595ab32e86491a101ca6ad41ddb3f0e227bc8ca16c506436ed82ca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,Create
dAt:1727099464136726046,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcsxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59d20d9-753f-4981-ba22-6c55aa2a8969,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9787a283f40c89345631a1471975c72c58125c2d7e909850b70017bffce591c,PodSandboxId:857205edaf93e30dc661321f31dee6e7eb4c64359904a3974a4bd8182b30e87d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727099464115564865,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f384f7942e27af7a040b09fc19f4248,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0bacbc97eebaa15a8d34d1459778f8f4bd28154e3e1602f29562666efaaf07,PodSandboxId:4ce8bb5da6c70e8a124d1674ccaf33fb3c366a1f1b18b07e995b6256f764abff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727099464084832657,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c977a6510f3bc4a88f0a6d8eed31183b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71736d1ae80b5607e3f7296e68194b234a6eaab58a0cf49dea774a28e6dd271,PodSandboxId:cf530d8641a8b62bdab99c93896affb608b6f8ace4c9ba3ab6949ed9fb853a41,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_CREATED,CreatedAt:1727099463999883332,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a98bc3ee9d3e4e319fd207452757fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce7aa385c0b51686c0cb560592a310f1e43457006b8137f9190a1795877c64a,PodSandboxId:cfcf3c15739e1d7058e60431cadbfa2001de769421f40215d998645ca3c89c6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727099463840147713,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2fbdec78e182332cae2fb826f3bcc37,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2129defd0dd421103080d63499100100a922f36a2bc2b34079def344edd369e,PodSandboxId:e3d690d9b2503c8f0a4b23ba9f27be03d185a24e5e839f9d2d4d1537f6b22e99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727099451166762368,Labels:map[string]string{io.kubernet
es.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2w6lx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c93af993-6b8c-4645-b682-ab222a8a43f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce6598d9c1a3015cf2add73aa3020c4d60af559fd99eb6fbe3596df742c3a0c7,PodSandboxId:a9d84517cd0b886355455afcb496d4070a248047f48fed242a498b4241aea482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727099450806400718,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtnmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77e82e2-b6fb-4c22-a518-81acc17a31f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6405a8bb95a0adf8558180b7df6a7d9ae6d80d8a41474bef6d4b242ce57c5e,PodSandboxId:104e58d2ebe01f4c317a945852570735f040760310b605036fa
15fa1f9b5da94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727099449817097502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2fbdec78e182332cae2fb826f3bcc37,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dc3cf66a254406ca5e774bdf6ff6e91b43aeb25e455383d780584c63dbb6bd5,PodSandboxId:610a7188b85c870b538b9bf722663b
8d08fd924171479d1354d399b408c29fee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727099449630850172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c977a6510f3bc4a88f0a6d8eed31183b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197b794eaf4498aaca1b2fe1d2bbe8f322bb65ac7e97473b7af6945c30a56357,PodSandboxId:775ed54c4ab14eaf6ed7cb04f81dde757543942ec75d5ce5a4c5ce7997261d1f,M
etadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727099449584345016,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f384f7942e27af7a040b09fc19f4248,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e506174330aa34efdb50e80ffcd9f7b78867183cd845937ef84457b46705a130,PodSandboxId:e5e139f5fe0ed9270ffbc89c66c957c8d2605b8d4d1f0e4e341f99056614dd31,Metadat
a:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727099449460382970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c90098-67b5-49e8-8a14-ec1deb3ff55f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36227a90472bcc8b7fdaef75361c989c8479edd87d2a30c517d8cb52a45235e,PodSandboxId:d853321f934bdaea8043d1f6d74e3375694048cce2eeffc966ad38ebc2d301d1,Metadata:&ContainerM
etadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727099449498252823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcsxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59d20d9-753f-4981-ba22-6c55aa2a8969,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34f164a3-b1f7-4bf4-921e-2c85a008355a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.646088685Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=221314fe-bf1d-4929-a522-9b8a95ac2e73 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.646273876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=221314fe-bf1d-4929-a522-9b8a95ac2e73 name=/runtime.v1.RuntimeService/Version
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.648599227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33982adb-ef0c-4440-ada0-b2d73fbf8029 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.648984358Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099473648961448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33982adb-ef0c-4440-ada0-b2d73fbf8029 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.650069853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebf107fb-6e77-4456-9bdb-7d6ccd31acb3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.650142446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebf107fb-6e77-4456-9bdb-7d6ccd31acb3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 13:51:13 kubernetes-upgrade-678282 crio[3049]: time="2024-09-23 13:51:13.650658136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d5c68197bfba5c80016a93aa9acd39871f34a5160bc7f86c5a194367d36d2cc,PodSandboxId:36bcf87d509ae65d4f443fbccad47ce83faa69ed815754ad036245836836038f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727099470406491357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtnmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77e82e2-b6fb-4c22-a518-81acc17a31f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a42494fa68dd7493a9d36b732748d292a3df9645b5d5f15bf66544e89b4b5c,PodSandboxId:3018004767b76578e7cc3659555dba0a0fc86df34c8b9d383ac480ae223ee3f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727099470384585392,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2w6lx,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: c93af993-6b8c-4645-b682-ab222a8a43f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20171a5f3c0b2d410ba27a1cbfa76d7f062312beceaeb4898c629ed7610d343,PodSandboxId:cf530d8641a8b62bdab99c93896affb608b6f8ace4c9ba3ab6949ed9fb853a41,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CON
TAINER_RUNNING,CreatedAt:1727099466540699605,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a98bc3ee9d3e4e319fd207452757fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f5e2b6885287f4fa410e4dfb08a3cd4b91a6aff77ef85acdd19ee40adf7152,PodSandboxId:f8b24dc31a59a73323c289ebffa390d5917004d86dce78203bf7011eed990022,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CON
TAINER_RUNNING,CreatedAt:1727099464203847614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c90098-67b5-49e8-8a14-ec1deb3ff55f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a253820f32261532ce9fe05dabe40b085b0b6a2438b0802dab253791f9440f2f,PodSandboxId:bf61ea459595ab32e86491a101ca6ad41ddb3f0e227bc8ca16c506436ed82ca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,Create
dAt:1727099464136726046,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcsxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59d20d9-753f-4981-ba22-6c55aa2a8969,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9787a283f40c89345631a1471975c72c58125c2d7e909850b70017bffce591c,PodSandboxId:857205edaf93e30dc661321f31dee6e7eb4c64359904a3974a4bd8182b30e87d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727099464115564865,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f384f7942e27af7a040b09fc19f4248,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0bacbc97eebaa15a8d34d1459778f8f4bd28154e3e1602f29562666efaaf07,PodSandboxId:4ce8bb5da6c70e8a124d1674ccaf33fb3c366a1f1b18b07e995b6256f764abff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727099464084832657,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c977a6510f3bc4a88f0a6d8eed31183b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71736d1ae80b5607e3f7296e68194b234a6eaab58a0cf49dea774a28e6dd271,PodSandboxId:cf530d8641a8b62bdab99c93896affb608b6f8ace4c9ba3ab6949ed9fb853a41,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_CREATED,CreatedAt:1727099463999883332,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a98bc3ee9d3e4e319fd207452757fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ce7aa385c0b51686c0cb560592a310f1e43457006b8137f9190a1795877c64a,PodSandboxId:cfcf3c15739e1d7058e60431cadbfa2001de769421f40215d998645ca3c89c6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727099463840147713,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2fbdec78e182332cae2fb826f3bcc37,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2129defd0dd421103080d63499100100a922f36a2bc2b34079def344edd369e,PodSandboxId:e3d690d9b2503c8f0a4b23ba9f27be03d185a24e5e839f9d2d4d1537f6b22e99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727099451166762368,Labels:map[string]string{io.kubernet
es.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2w6lx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c93af993-6b8c-4645-b682-ab222a8a43f1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce6598d9c1a3015cf2add73aa3020c4d60af559fd99eb6fbe3596df742c3a0c7,PodSandboxId:a9d84517cd0b886355455afcb496d4070a248047f48fed242a498b4241aea482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727099450806400718,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gtnmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77e82e2-b6fb-4c22-a518-81acc17a31f6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6405a8bb95a0adf8558180b7df6a7d9ae6d80d8a41474bef6d4b242ce57c5e,PodSandboxId:104e58d2ebe01f4c317a945852570735f040760310b605036fa
15fa1f9b5da94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727099449817097502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2fbdec78e182332cae2fb826f3bcc37,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dc3cf66a254406ca5e774bdf6ff6e91b43aeb25e455383d780584c63dbb6bd5,PodSandboxId:610a7188b85c870b538b9bf722663b
8d08fd924171479d1354d399b408c29fee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727099449630850172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c977a6510f3bc4a88f0a6d8eed31183b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197b794eaf4498aaca1b2fe1d2bbe8f322bb65ac7e97473b7af6945c30a56357,PodSandboxId:775ed54c4ab14eaf6ed7cb04f81dde757543942ec75d5ce5a4c5ce7997261d1f,M
etadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727099449584345016,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f384f7942e27af7a040b09fc19f4248,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e506174330aa34efdb50e80ffcd9f7b78867183cd845937ef84457b46705a130,PodSandboxId:e5e139f5fe0ed9270ffbc89c66c957c8d2605b8d4d1f0e4e341f99056614dd31,Metadat
a:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727099449460382970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c90098-67b5-49e8-8a14-ec1deb3ff55f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36227a90472bcc8b7fdaef75361c989c8479edd87d2a30c517d8cb52a45235e,PodSandboxId:d853321f934bdaea8043d1f6d74e3375694048cce2eeffc966ad38ebc2d301d1,Metadata:&ContainerM
etadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727099449498252823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcsxn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a59d20d9-753f-4981-ba22-6c55aa2a8969,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebf107fb-6e77-4456-9bdb-7d6ccd31acb3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4d5c68197bfba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   36bcf87d509ae       coredns-7c65d6cfc9-gtnmk
	a6a42494fa68d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   3018004767b76       coredns-7c65d6cfc9-2w6lx
	e20171a5f3c0b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            3                   cf530d8641a8b       kube-scheduler-kubernetes-upgrade-678282
	23f5e2b688528       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 seconds ago       Running             storage-provisioner       2                   f8b24dc31a59a       storage-provisioner
	a253820f32261       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 seconds ago       Running             kube-proxy                2                   bf61ea459595a       kube-proxy-lcsxn
	d9787a283f40c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 seconds ago       Running             kube-apiserver            2                   857205edaf93e       kube-apiserver-kubernetes-upgrade-678282
	1e0bacbc97eeb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 seconds ago       Running             etcd                      2                   4ce8bb5da6c70       etcd-kubernetes-upgrade-678282
	d71736d1ae80b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 seconds ago       Created             kube-scheduler            2                   cf530d8641a8b       kube-scheduler-kubernetes-upgrade-678282
	2ce7aa385c0b5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 seconds ago       Running             kube-controller-manager   2                   cfcf3c15739e1       kube-controller-manager-kubernetes-upgrade-678282
	e2129defd0dd4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   22 seconds ago      Exited              coredns                   1                   e3d690d9b2503       coredns-7c65d6cfc9-2w6lx
	ce6598d9c1a30       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   22 seconds ago      Exited              coredns                   1                   a9d84517cd0b8       coredns-7c65d6cfc9-gtnmk
	9e6405a8bb95a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   23 seconds ago      Exited              kube-controller-manager   1                   104e58d2ebe01       kube-controller-manager-kubernetes-upgrade-678282
	6dc3cf66a2544       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   24 seconds ago      Exited              etcd                      1                   610a7188b85c8       etcd-kubernetes-upgrade-678282
	197b794eaf449       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   24 seconds ago      Exited              kube-apiserver            1                   775ed54c4ab14       kube-apiserver-kubernetes-upgrade-678282
	d36227a90472b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   24 seconds ago      Exited              kube-proxy                1                   d853321f934bd       kube-proxy-lcsxn
	e506174330aa3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   24 seconds ago      Exited              storage-provisioner       1                   e5e139f5fe0ed       storage-provisioner
	
	
	==> coredns [4d5c68197bfba5c80016a93aa9acd39871f34a5160bc7f86c5a194367d36d2cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a6a42494fa68dd7493a9d36b732748d292a3df9645b5d5f15bf66544e89b4b5c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ce6598d9c1a3015cf2add73aa3020c4d60af559fd99eb6fbe3596df742c3a0c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e2129defd0dd421103080d63499100100a922f36a2bc2b34079def344edd369e] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-678282
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-678282
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:50:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-678282
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:51:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:51:09 +0000   Mon, 23 Sep 2024 13:50:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:51:09 +0000   Mon, 23 Sep 2024 13:50:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:51:09 +0000   Mon, 23 Sep 2024 13:50:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:51:09 +0000   Mon, 23 Sep 2024 13:50:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    kubernetes-upgrade-678282
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3de6ce543f1f4d6491b32658d259404c
	  System UUID:                3de6ce54-3f1f-4d64-91b3-2658d259404c
	  Boot ID:                    622621f2-1b6a-43a1-ba61-8b175ae3c986
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2w6lx                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     48s
	  kube-system                 coredns-7c65d6cfc9-gtnmk                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     48s
	  kube-system                 etcd-kubernetes-upgrade-678282                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         52s
	  kube-system                 kube-apiserver-kubernetes-upgrade-678282             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-678282    200m (10%)    0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-proxy-lcsxn                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-kubernetes-upgrade-678282             100m (5%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    59s (x8 over 60s)  kubelet          Node kubernetes-upgrade-678282 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x7 over 60s)  kubelet          Node kubernetes-upgrade-678282 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  59s (x8 over 60s)  kubelet          Node kubernetes-upgrade-678282 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           49s                node-controller  Node kubernetes-upgrade-678282 event: Registered Node kubernetes-upgrade-678282 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-678282 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-678282 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-678282 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-678282 event: Registered Node kubernetes-upgrade-678282 in Controller
	
	
	==> dmesg <==
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep23 13:50] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.063735] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059754] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.175886] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.171802] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.292730] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +4.191123] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +2.260263] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.087023] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.999808] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.085677] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.666391] kauditd_printk_skb: 65 callbacks suppressed
	[ +20.788004] kauditd_printk_skb: 34 callbacks suppressed
	[  +1.455580] systemd-fstab-generator[2841]: Ignoring "noauto" option for root device
	[  +0.258126] systemd-fstab-generator[2870]: Ignoring "noauto" option for root device
	[  +0.387983] systemd-fstab-generator[2950]: Ignoring "noauto" option for root device
	[  +0.281828] systemd-fstab-generator[2994]: Ignoring "noauto" option for root device
	[  +0.445280] systemd-fstab-generator[3041]: Ignoring "noauto" option for root device
	[Sep23 13:51] systemd-fstab-generator[3358]: Ignoring "noauto" option for root device
	[  +0.090390] kauditd_printk_skb: 207 callbacks suppressed
	[  +3.010444] systemd-fstab-generator[4095]: Ignoring "noauto" option for root device
	[  +4.687435] kauditd_printk_skb: 152 callbacks suppressed
	[  +1.124946] systemd-fstab-generator[4519]: Ignoring "noauto" option for root device
	
	
	==> etcd [1e0bacbc97eebaa15a8d34d1459778f8f4bd28154e3e1602f29562666efaaf07] <==
	{"level":"info","ts":"2024-09-23T13:51:06.662433Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4cd5d1376c5e8c88","local-member-id":"ce9e8f286885b37e","added-peer-id":"ce9e8f286885b37e","added-peer-peer-urls":["https://192.168.39.215:2380"]}
	{"level":"info","ts":"2024-09-23T13:51:06.662531Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4cd5d1376c5e8c88","local-member-id":"ce9e8f286885b37e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:51:06.662566Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:51:06.664824Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:51:06.667850Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T13:51:06.668039Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2024-09-23T13:51:06.668049Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2024-09-23T13:51:06.669114Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ce9e8f286885b37e","initial-advertise-peer-urls":["https://192.168.39.215:2380"],"listen-peer-urls":["https://192.168.39.215:2380"],"advertise-client-urls":["https://192.168.39.215:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.215:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T13:51:06.669183Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T13:51:08.043724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-23T13:51:08.043784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-23T13:51:08.043818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e received MsgPreVoteResp from ce9e8f286885b37e at term 3"}
	{"level":"info","ts":"2024-09-23T13:51:08.043832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became candidate at term 4"}
	{"level":"info","ts":"2024-09-23T13:51:08.043838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e received MsgVoteResp from ce9e8f286885b37e at term 4"}
	{"level":"info","ts":"2024-09-23T13:51:08.043848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became leader at term 4"}
	{"level":"info","ts":"2024-09-23T13:51:08.043855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce9e8f286885b37e elected leader ce9e8f286885b37e at term 4"}
	{"level":"info","ts":"2024-09-23T13:51:08.046754Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ce9e8f286885b37e","local-member-attributes":"{Name:kubernetes-upgrade-678282 ClientURLs:[https://192.168.39.215:2379]}","request-path":"/0/members/ce9e8f286885b37e/attributes","cluster-id":"4cd5d1376c5e8c88","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T13:51:08.046978Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:51:08.047197Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T13:51:08.047247Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T13:51:08.047360Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:51:08.048127Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:51:08.048310Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:51:08.049108Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T13:51:08.049398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.215:2379"}
	
	
	==> etcd [6dc3cf66a254406ca5e774bdf6ff6e91b43aeb25e455383d780584c63dbb6bd5] <==
	{"level":"info","ts":"2024-09-23T13:50:50.875502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T13:50:50.875530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e received MsgPreVoteResp from ce9e8f286885b37e at term 2"}
	{"level":"info","ts":"2024-09-23T13:50:50.875552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T13:50:50.875560Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e received MsgVoteResp from ce9e8f286885b37e at term 3"}
	{"level":"info","ts":"2024-09-23T13:50:50.875583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became leader at term 3"}
	{"level":"info","ts":"2024-09-23T13:50:50.875593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce9e8f286885b37e elected leader ce9e8f286885b37e at term 3"}
	{"level":"info","ts":"2024-09-23T13:50:50.888640Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ce9e8f286885b37e","local-member-attributes":"{Name:kubernetes-upgrade-678282 ClientURLs:[https://192.168.39.215:2379]}","request-path":"/0/members/ce9e8f286885b37e/attributes","cluster-id":"4cd5d1376c5e8c88","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T13:50:50.888701Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:50:50.889393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:50:50.894560Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:50:50.895784Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.215:2379"}
	{"level":"info","ts":"2024-09-23T13:50:50.898076Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:50:50.905977Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T13:50:50.912202Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T13:50:50.912254Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T13:50:52.153373Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-23T13:50:52.153457Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-678282","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.215:2380"],"advertise-client-urls":["https://192.168.39.215:2379"]}
	{"level":"warn","ts":"2024-09-23T13:50:52.153644Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T13:50:52.153769Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T13:50:52.183036Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.215:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T13:50:52.183145Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.215:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T13:50:52.183360Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ce9e8f286885b37e","current-leader-member-id":"ce9e8f286885b37e"}
	{"level":"info","ts":"2024-09-23T13:50:52.194635Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2024-09-23T13:50:52.194932Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2024-09-23T13:50:52.195007Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-678282","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.215:2380"],"advertise-client-urls":["https://192.168.39.215:2379"]}
	
	
	==> kernel <==
	 13:51:14 up 1 min,  0 users,  load average: 2.56, 0.74, 0.26
	Linux kubernetes-upgrade-678282 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [197b794eaf4498aaca1b2fe1d2bbe8f322bb65ac7e97473b7af6945c30a56357] <==
	I0923 13:50:51.094730       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0923 13:50:51.217926       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0923 13:50:51.217968       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0923 13:50:51.228581       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:50:51.253316       1 instance.go:232] Using reconciler: lease
	I0923 13:50:51.550454       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	W0923 13:50:51.550593       1 genericapiserver.go:765] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	W0923 13:50:52.157119       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.157283       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.164946       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.166293       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.166385       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.166456       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.166541       1 logging.go:55] [core] [Channel #43 SubChannel #44]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.166646       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.166759       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.166828       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.166896       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.167568       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.167647       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.169690       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.169832       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.170051       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.170147       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 13:50:52.170280       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d9787a283f40c89345631a1471975c72c58125c2d7e909850b70017bffce591c] <==
	I0923 13:51:09.450306       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:51:09.450393       1 policy_source.go:224] refreshing policies
	I0923 13:51:09.450658       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 13:51:09.453795       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 13:51:09.455794       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 13:51:09.455822       1 aggregator.go:171] initial CRD sync complete...
	I0923 13:51:09.455829       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 13:51:09.455834       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 13:51:09.455839       1 cache.go:39] Caches are synced for autoregister controller
	I0923 13:51:09.455880       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 13:51:09.455923       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 13:51:09.455971       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 13:51:09.456066       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 13:51:09.456432       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 13:51:09.456643       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 13:51:09.462025       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0923 13:51:09.462912       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0923 13:51:10.256105       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 13:51:10.520771       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 13:51:11.317705       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 13:51:11.352217       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 13:51:11.434604       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 13:51:11.503989       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 13:51:11.513801       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 13:51:12.867497       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2ce7aa385c0b51686c0cb560592a310f1e43457006b8137f9190a1795877c64a] <==
	I0923 13:51:12.740509       1 shared_informer.go:320] Caches are synced for daemon sets
	I0923 13:51:12.754284       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0923 13:51:12.754778       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0923 13:51:12.754994       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="104.993µs"
	I0923 13:51:12.757239       1 shared_informer.go:320] Caches are synced for taint
	I0923 13:51:12.757318       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0923 13:51:12.757464       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-678282"
	I0923 13:51:12.757531       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0923 13:51:12.780817       1 shared_informer.go:320] Caches are synced for stateful set
	I0923 13:51:12.804331       1 shared_informer.go:320] Caches are synced for persistent volume
	I0923 13:51:12.804563       1 shared_informer.go:320] Caches are synced for GC
	I0923 13:51:12.804595       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0923 13:51:12.805434       1 shared_informer.go:320] Caches are synced for attach detach
	I0923 13:51:12.805950       1 shared_informer.go:320] Caches are synced for job
	I0923 13:51:12.810637       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0923 13:51:12.810673       1 shared_informer.go:320] Caches are synced for HPA
	I0923 13:51:12.813247       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0923 13:51:12.833456       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0923 13:51:12.833623       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-678282"
	I0923 13:51:12.860851       1 shared_informer.go:320] Caches are synced for endpoint
	I0923 13:51:12.861366       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 13:51:12.861495       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 13:51:13.319436       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 13:51:13.320538       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 13:51:13.320569       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [9e6405a8bb95a0adf8558180b7df6a7d9ae6d80d8a41474bef6d4b242ce57c5e] <==
	
	
	==> kube-proxy [a253820f32261532ce9fe05dabe40b085b0b6a2438b0802dab253791f9440f2f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:51:10.664743       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 13:51:10.679367       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E0923 13:51:10.679576       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:51:10.744267       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:51:10.744318       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:51:10.744345       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:51:10.749487       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:51:10.749810       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:51:10.749833       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:51:10.752462       1 config.go:199] "Starting service config controller"
	I0923 13:51:10.752502       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:51:10.752528       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:51:10.752531       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:51:10.753056       1 config.go:328] "Starting node config controller"
	I0923 13:51:10.753129       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:51:10.853120       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:51:10.853289       1 shared_informer.go:320] Caches are synced for node config
	I0923 13:51:10.853304       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [d36227a90472bcc8b7fdaef75361c989c8479edd87d2a30c517d8cb52a45235e] <==
	I0923 13:50:51.474559       1 server_linux.go:66] "Using iptables proxy"
	E0923 13:50:51.763401       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:50:51.989256       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	
	
	==> kube-scheduler [d71736d1ae80b5607e3f7296e68194b234a6eaab58a0cf49dea774a28e6dd271] <==
	
	
	==> kube-scheduler [e20171a5f3c0b2d410ba27a1cbfa76d7f062312beceaeb4898c629ed7610d343] <==
	I0923 13:51:07.285377       1 serving.go:386] Generated self-signed cert in-memory
	W0923 13:51:09.320386       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 13:51:09.320489       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 13:51:09.320518       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 13:51:09.320542       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 13:51:09.367856       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 13:51:09.367925       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:51:09.371313       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 13:51:09.371445       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 13:51:09.372144       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 13:51:09.372338       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 13:51:09.472700       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.255109    4102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2fbdec78e182332cae2fb826f3bcc37-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-678282\" (UID: \"b2fbdec78e182332cae2fb826f3bcc37\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-678282"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.255124    4102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c977a6510f3bc4a88f0a6d8eed31183b-etcd-data\") pod \"etcd-kubernetes-upgrade-678282\" (UID: \"c977a6510f3bc4a88f0a6d8eed31183b\") " pod="kube-system/etcd-kubernetes-upgrade-678282"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.255144    4102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f384f7942e27af7a040b09fc19f4248-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-678282\" (UID: \"7f384f7942e27af7a040b09fc19f4248\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-678282"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.255201    4102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f384f7942e27af7a040b09fc19f4248-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-678282\" (UID: \"7f384f7942e27af7a040b09fc19f4248\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-678282"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.255221    4102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c977a6510f3bc4a88f0a6d8eed31183b-etcd-certs\") pod \"etcd-kubernetes-upgrade-678282\" (UID: \"c977a6510f3bc4a88f0a6d8eed31183b\") " pod="kube-system/etcd-kubernetes-upgrade-678282"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.255236    4102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2fbdec78e182332cae2fb826f3bcc37-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-678282\" (UID: \"b2fbdec78e182332cae2fb826f3bcc37\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-678282"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.255251    4102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a98bc3ee9d3e4e319fd207452757fd5-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-678282\" (UID: \"6a98bc3ee9d3e4e319fd207452757fd5\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-678282"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.451869    4102 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-678282"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: E0923 13:51:06.452916    4102 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.215:8443: connect: connection refused" node="kubernetes-upgrade-678282"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.527385    4102 scope.go:117] "RemoveContainer" containerID="d71736d1ae80b5607e3f7296e68194b234a6eaab58a0cf49dea774a28e6dd271"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.528035    4102 scope.go:117] "RemoveContainer" containerID="6dc3cf66a254406ca5e774bdf6ff6e91b43aeb25e455383d780584c63dbb6bd5"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.529376    4102 scope.go:117] "RemoveContainer" containerID="197b794eaf4498aaca1b2fe1d2bbe8f322bb65ac7e97473b7af6945c30a56357"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: E0923 13:51:06.655185    4102 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-678282?timeout=10s\": dial tcp 192.168.39.215:8443: connect: connection refused" interval="800ms"
	Sep 23 13:51:06 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:06.855253    4102 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-678282"
	Sep 23 13:51:09 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:09.511377    4102 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-678282"
	Sep 23 13:51:09 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:09.511822    4102 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-678282"
	Sep 23 13:51:09 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:09.511953    4102 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 23 13:51:09 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:09.513436    4102 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 23 13:51:10 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:10.042809    4102 apiserver.go:52] "Watching apiserver"
	Sep 23 13:51:10 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:10.049422    4102 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 23 13:51:10 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:10.130967    4102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a59d20d9-753f-4981-ba22-6c55aa2a8969-xtables-lock\") pod \"kube-proxy-lcsxn\" (UID: \"a59d20d9-753f-4981-ba22-6c55aa2a8969\") " pod="kube-system/kube-proxy-lcsxn"
	Sep 23 13:51:10 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:10.131108    4102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a59d20d9-753f-4981-ba22-6c55aa2a8969-lib-modules\") pod \"kube-proxy-lcsxn\" (UID: \"a59d20d9-753f-4981-ba22-6c55aa2a8969\") " pod="kube-system/kube-proxy-lcsxn"
	Sep 23 13:51:10 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:10.131220    4102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e8c90098-67b5-49e8-8a14-ec1deb3ff55f-tmp\") pod \"storage-provisioner\" (UID: \"e8c90098-67b5-49e8-8a14-ec1deb3ff55f\") " pod="kube-system/storage-provisioner"
	Sep 23 13:51:10 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:10.348228    4102 scope.go:117] "RemoveContainer" containerID="d36227a90472bcc8b7fdaef75361c989c8479edd87d2a30c517d8cb52a45235e"
	Sep 23 13:51:10 kubernetes-upgrade-678282 kubelet[4102]: I0923 13:51:10.348584    4102 scope.go:117] "RemoveContainer" containerID="e506174330aa34efdb50e80ffcd9f7b78867183cd845937ef84457b46705a130"
	
	
	==> storage-provisioner [23f5e2b6885287f4fa410e4dfb08a3cd4b91a6aff77ef85acdd19ee40adf7152] <==
	I0923 13:51:10.432091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 13:51:10.469094       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 13:51:10.472491       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 13:51:10.549083       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 13:51:10.549299       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-678282_9d4c2e21-0565-4180-ad90-dee6732ce2ab!
	I0923 13:51:10.549746       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"341c8c8a-633b-472c-ae88-0d21d37d0244", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-678282_9d4c2e21-0565-4180-ad90-dee6732ce2ab became leader
	I0923 13:51:10.653402       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-678282_9d4c2e21-0565-4180-ad90-dee6732ce2ab!
	
	
	==> storage-provisioner [e506174330aa34efdb50e80ffcd9f7b78867183cd845937ef84457b46705a130] <==
	I0923 13:50:50.010013       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0923 13:50:50.011464       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 13:51:13.110070  714625 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19690-662205/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-678282 -n kubernetes-upgrade-678282
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-678282 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-678282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-678282
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-678282: (1.36104222s)
--- FAIL: TestKubernetesUpgrade (412.44s)
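
One note on the stderr in the post-mortem above: "failed to read file .../lastStart.txt: bufio.Scanner: token too long" is the Go scanner hitting its default 64 KiB per-line limit on a very long log line (the serialized cluster configs later in this report easily exceed that). As a minimal illustrative sketch only, not minikube's actual logs code, the usual remedy is to hand the scanner a larger buffer before scanning; the file path here is a placeholder:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // placeholder path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default max token size is 64 KiB; raise it so very long lines
	// still scan instead of aborting with "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		_ = sc.Text() // process each line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}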

                                                
                                    

Test pass (226/274)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 24.01
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 12.52
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
22 TestOffline 78.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 131.71
31 TestAddons/serial/GCPAuth/Namespaces 0.16
35 TestAddons/parallel/InspektorGadget 11.99
38 TestAddons/parallel/CSI 53.68
39 TestAddons/parallel/Headlamp 19.03
40 TestAddons/parallel/CloudSpanner 5.58
41 TestAddons/parallel/LocalPath 12.14
42 TestAddons/parallel/NvidiaDevicePlugin 6.5
43 TestAddons/parallel/Yakd 10.75
44 TestAddons/StoppedEnableDisable 92.74
45 TestCertOptions 55.7
46 TestCertExpiration 302.1
48 TestForceSystemdFlag 73.87
49 TestForceSystemdEnv 68.26
51 TestKVMDriverInstallOrUpdate 6.16
55 TestErrorSpam/setup 42.69
56 TestErrorSpam/start 0.36
57 TestErrorSpam/status 0.76
58 TestErrorSpam/pause 1.59
59 TestErrorSpam/unpause 1.76
60 TestErrorSpam/stop 5.63
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 57.78
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 49.43
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.08
72 TestFunctional/serial/CacheCmd/cache/add_local 2.14
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.81
77 TestFunctional/serial/CacheCmd/cache/delete 0.1
78 TestFunctional/serial/MinikubeKubectlCmd 0.12
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 383.54
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.23
83 TestFunctional/serial/LogsFileCmd 1.22
84 TestFunctional/serial/InvalidService 4.37
86 TestFunctional/parallel/ConfigCmd 0.35
87 TestFunctional/parallel/DashboardCmd 27.07
88 TestFunctional/parallel/DryRun 0.31
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 1.05
94 TestFunctional/parallel/ServiceCmdConnect 11.89
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 51.29
98 TestFunctional/parallel/SSHCmd 0.4
99 TestFunctional/parallel/CpCmd 1.32
100 TestFunctional/parallel/MySQL 25.29
101 TestFunctional/parallel/FileSync 0.22
102 TestFunctional/parallel/CertSync 1.36
106 TestFunctional/parallel/NodeLabels 0.08
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
110 TestFunctional/parallel/License 0.49
111 TestFunctional/parallel/Version/short 0.05
112 TestFunctional/parallel/Version/components 0.68
113 TestFunctional/parallel/ImageCommands/ImageListShort 1.09
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
117 TestFunctional/parallel/ImageCommands/ImageBuild 8.47
118 TestFunctional/parallel/ImageCommands/Setup 1.85
128 TestFunctional/parallel/ServiceCmd/DeployApp 11.17
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.32
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.96
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.68
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.15
136 TestFunctional/parallel/ServiceCmd/List 0.31
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
139 TestFunctional/parallel/ServiceCmd/Format 0.37
140 TestFunctional/parallel/ServiceCmd/URL 0.4
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
142 TestFunctional/parallel/MountCmd/any-port 8.81
143 TestFunctional/parallel/ProfileCmd/profile_list 0.34
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
148 TestFunctional/parallel/MountCmd/specific-port 1.85
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
150 TestFunctional/delete_echo-server_images 0.04
151 TestFunctional/delete_my-image_image 0.02
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 201.33
157 TestMultiControlPlane/serial/DeployApp 6.88
158 TestMultiControlPlane/serial/PingHostFromPods 1.3
159 TestMultiControlPlane/serial/AddWorkerNode 61.41
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
162 TestMultiControlPlane/serial/CopyFile 13.36
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.03
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.73
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
171 TestMultiControlPlane/serial/RestartCluster 324.77
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
173 TestMultiControlPlane/serial/AddSecondaryNode 80.07
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
178 TestJSONOutput/start/Command 53.8
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.7
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.64
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 23.36
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.05
207 TestMinikubeProfile 92.16
210 TestMountStart/serial/StartWithMountFirst 30.77
211 TestMountStart/serial/VerifyMountFirst 0.39
212 TestMountStart/serial/StartWithMountSecond 24.24
213 TestMountStart/serial/VerifyMountSecond 0.38
214 TestMountStart/serial/DeleteFirst 0.89
215 TestMountStart/serial/VerifyMountPostDelete 0.38
216 TestMountStart/serial/Stop 1.28
217 TestMountStart/serial/RestartStopped 22.58
218 TestMountStart/serial/VerifyMountPostStop 0.38
221 TestMultiNode/serial/FreshStart2Nodes 110.94
222 TestMultiNode/serial/DeployApp2Nodes 6.18
223 TestMultiNode/serial/PingHostFrom2Pods 0.87
224 TestMultiNode/serial/AddNode 51.9
225 TestMultiNode/serial/MultiNodeLabels 0.07
226 TestMultiNode/serial/ProfileList 0.61
227 TestMultiNode/serial/CopyFile 7.59
228 TestMultiNode/serial/StopNode 2.44
229 TestMultiNode/serial/StartAfterStop 40.52
231 TestMultiNode/serial/DeleteNode 2.34
233 TestMultiNode/serial/RestartMultiNode 189.05
234 TestMultiNode/serial/ValidateNameConflict 44.94
241 TestScheduledStopUnix 111.7
245 TestRunningBinaryUpgrade 199.49
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
251 TestNoKubernetes/serial/StartWithK8s 99.1
252 TestStoppedBinaryUpgrade/Setup 2.23
253 TestStoppedBinaryUpgrade/Upgrade 130.23
254 TestNoKubernetes/serial/StartWithStopK8s 40.97
255 TestNoKubernetes/serial/Start 28.3
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
257 TestNoKubernetes/serial/ProfileList 24.94
258 TestNoKubernetes/serial/Stop 3.17
259 TestNoKubernetes/serial/StartNoArgs 23.48
260 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
268 TestNetworkPlugins/group/false 3.2
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
281 TestPause/serial/Start 109.83
282 TestPause/serial/SecondStartNoReconfiguration 46.74
283 TestPause/serial/Pause 0.8
284 TestPause/serial/VerifyStatus 0.28
285 TestPause/serial/Unpause 0.89
286 TestPause/serial/PauseAgain 1.23
287 TestPause/serial/DeletePaused 1.5
288 TestPause/serial/VerifyDeletedResources 0.69
289 TestNetworkPlugins/group/auto/Start 87.83
290 TestNetworkPlugins/group/enable-default-cni/Start 69.07
291 TestNetworkPlugins/group/kindnet/Start 84.22
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.38
294 TestNetworkPlugins/group/auto/KubeletFlags 0.22
295 TestNetworkPlugins/group/auto/NetCatPod 12.31
296 TestNetworkPlugins/group/enable-default-cni/DNS 21.17
297 TestNetworkPlugins/group/auto/DNS 0.17
298 TestNetworkPlugins/group/auto/Localhost 0.14
299 TestNetworkPlugins/group/auto/HairPin 0.13
300 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
301 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
302 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
303 TestNetworkPlugins/group/flannel/Start 77.99
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
305 TestNetworkPlugins/group/kindnet/NetCatPod 12.25
306 TestNetworkPlugins/group/kindnet/DNS 0.21
307 TestNetworkPlugins/group/kindnet/Localhost 0.14
308 TestNetworkPlugins/group/kindnet/HairPin 0.18
309 TestNetworkPlugins/group/calico/Start 99.98
310 TestNetworkPlugins/group/custom-flannel/Start 109.27
311 TestNetworkPlugins/group/bridge/Start 144.75
312 TestNetworkPlugins/group/flannel/ControllerPod 6.01
313 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
314 TestNetworkPlugins/group/flannel/NetCatPod 11.24
315 TestNetworkPlugins/group/flannel/DNS 0.2
316 TestNetworkPlugins/group/flannel/Localhost 0.17
317 TestNetworkPlugins/group/flannel/HairPin 0.18
320 TestNetworkPlugins/group/calico/ControllerPod 6.01
321 TestNetworkPlugins/group/calico/KubeletFlags 0.32
322 TestNetworkPlugins/group/calico/NetCatPod 13.57
323 TestNetworkPlugins/group/calico/DNS 0.17
324 TestNetworkPlugins/group/calico/Localhost 0.15
325 TestNetworkPlugins/group/calico/HairPin 0.15
326 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
327 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.26
328 TestNetworkPlugins/group/custom-flannel/DNS 0.23
329 TestNetworkPlugins/group/custom-flannel/Localhost 0.28
330 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
335 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
336 TestNetworkPlugins/group/bridge/NetCatPod 11.23
337 TestNetworkPlugins/group/bridge/DNS 0.18
338 TestNetworkPlugins/group/bridge/Localhost 0.14
339 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.20.0/json-events (24.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-832165 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-832165 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.005250333s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.01s)
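
For orientation, the json-events test drives "minikube start -o=json", which emits one JSON event per line on stdout. The sketch below is only an illustration of consuming that stream; it assumes nothing about minikube's event schema (each line is decoded into a generic map), and the profile name is a placeholder:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the flags used by the test; "demo-profile" is a placeholder name.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json", "--download-only",
		"-p", "demo-profile", "--force", "--alsologtostderr",
		"--kubernetes-version=v1.20.0", "--container-runtime=crio", "--driver=kvm2")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var event map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &event); err != nil {
			continue // skip any non-JSON output
		}
		fmt.Println(event) // inspect each emitted event
	}
	_ = cmd.Wait()
}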

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 12:28:10.608553  669447 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0923 12:28:10.608671  669447 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-832165
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-832165: exit status 85 (65.847126ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-832165 | jenkins | v1.34.0 | 23 Sep 24 12:27 UTC |          |
	|         | -p download-only-832165        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:27:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:27:46.645312  669459 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:27:46.645469  669459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:27:46.645483  669459 out.go:358] Setting ErrFile to fd 2...
	I0923 12:27:46.645491  669459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:27:46.645707  669459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	W0923 12:27:46.645858  669459 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19690-662205/.minikube/config/config.json: open /home/jenkins/minikube-integration/19690-662205/.minikube/config/config.json: no such file or directory
	I0923 12:27:46.646474  669459 out.go:352] Setting JSON to true
	I0923 12:27:46.647609  669459 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7810,"bootTime":1727086657,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:27:46.647744  669459 start.go:139] virtualization: kvm guest
	I0923 12:27:46.650495  669459 out.go:97] [download-only-832165] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:27:46.650684  669459 notify.go:220] Checking for updates...
	W0923 12:27:46.650711  669459 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 12:27:46.652339  669459 out.go:169] MINIKUBE_LOCATION=19690
	I0923 12:27:46.653979  669459 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:27:46.655482  669459 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:27:46.656540  669459 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:27:46.657748  669459 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 12:27:46.659990  669459 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 12:27:46.660313  669459 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:27:46.694017  669459 out.go:97] Using the kvm2 driver based on user configuration
	I0923 12:27:46.694062  669459 start.go:297] selected driver: kvm2
	I0923 12:27:46.694069  669459 start.go:901] validating driver "kvm2" against <nil>
	I0923 12:27:46.694441  669459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:27:46.694549  669459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 12:27:46.710947  669459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 12:27:46.711029  669459 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:27:46.711618  669459 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0923 12:27:46.711810  669459 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 12:27:46.711850  669459 cni.go:84] Creating CNI manager for ""
	I0923 12:27:46.711924  669459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 12:27:46.711935  669459 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 12:27:46.712016  669459 start.go:340] cluster config:
	{Name:download-only-832165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-832165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:27:46.712246  669459 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:27:46.714290  669459 out.go:97] Downloading VM boot image ...
	I0923 12:27:46.714355  669459 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19690-662205/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 12:27:55.989732  669459 out.go:97] Starting "download-only-832165" primary control-plane node in "download-only-832165" cluster
	I0923 12:27:55.989769  669459 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 12:27:56.089350  669459 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0923 12:27:56.089407  669459 cache.go:56] Caching tarball of preloaded images
	I0923 12:27:56.089579  669459 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 12:27:56.091591  669459 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 12:27:56.091612  669459 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0923 12:27:56.237585  669459 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-832165 host does not exist
	  To start a cluster, run: "minikube start -p download-only-832165"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
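
The LogsDuration output above also records the preload being fetched from a URL carrying a "?checksum=md5:..." hint. As a rough sketch of that verify-while-downloading pattern only (not minikube's own download code), using the URL and md5 value shown in the log line above:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dest while hashing the bytes, then
// compares the digest against wantMD5.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and checksum copied from the preload download line in the log above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	if err := downloadWithMD5(url, "preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4", "f93b07cde9c3289306cbaeb7a1803c19"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}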

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-832165
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (12.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-473947 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-473947 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.518139854s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (12.52s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 12:28:23.486980  669447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0923 12:28:23.487021  669447 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-473947
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-473947: exit status 85 (62.524896ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-832165 | jenkins | v1.34.0 | 23 Sep 24 12:27 UTC |                     |
	|         | -p download-only-832165        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| delete  | -p download-only-832165        | download-only-832165 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC | 23 Sep 24 12:28 UTC |
	| start   | -o=json --download-only        | download-only-473947 | jenkins | v1.34.0 | 23 Sep 24 12:28 UTC |                     |
	|         | -p download-only-473947        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:28:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:28:11.012954  669718 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:28:11.013217  669718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:28:11.013226  669718 out.go:358] Setting ErrFile to fd 2...
	I0923 12:28:11.013230  669718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:28:11.013416  669718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 12:28:11.014050  669718 out.go:352] Setting JSON to true
	I0923 12:28:11.015124  669718 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7834,"bootTime":1727086657,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:28:11.015242  669718 start.go:139] virtualization: kvm guest
	I0923 12:28:11.017506  669718 out.go:97] [download-only-473947] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:28:11.017747  669718 notify.go:220] Checking for updates...
	I0923 12:28:11.018882  669718 out.go:169] MINIKUBE_LOCATION=19690
	I0923 12:28:11.020871  669718 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:28:11.022435  669718 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:28:11.023936  669718 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:28:11.025601  669718 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 12:28:11.028451  669718 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 12:28:11.028713  669718 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:28:11.062958  669718 out.go:97] Using the kvm2 driver based on user configuration
	I0923 12:28:11.063004  669718 start.go:297] selected driver: kvm2
	I0923 12:28:11.063012  669718 start.go:901] validating driver "kvm2" against <nil>
	I0923 12:28:11.063404  669718 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:28:11.063570  669718 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-662205/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 12:28:11.080578  669718 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 12:28:11.080658  669718 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:28:11.081222  669718 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0923 12:28:11.081427  669718 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 12:28:11.081457  669718 cni.go:84] Creating CNI manager for ""
	I0923 12:28:11.081507  669718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 12:28:11.081516  669718 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 12:28:11.081582  669718 start.go:340] cluster config:
	{Name:download-only-473947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-473947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:28:11.081677  669718 iso.go:125] acquiring lock: {Name:mkb968a95eae3838cd5c328cf3385c2ef4ff2c8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:28:11.083630  669718 out.go:97] Starting "download-only-473947" primary control-plane node in "download-only-473947" cluster
	I0923 12:28:11.083658  669718 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:28:11.184561  669718 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 12:28:11.184618  669718 cache.go:56] Caching tarball of preloaded images
	I0923 12:28:11.184814  669718 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 12:28:11.186932  669718 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 12:28:11.186974  669718 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0923 12:28:11.290474  669718 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19690-662205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-473947 host does not exist
	  To start a cluster, run: "minikube start -p download-only-473947"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-473947
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I0923 12:28:24.102090  669447 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-529103 --alsologtostderr --binary-mirror http://127.0.0.1:35373 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-529103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-529103
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
x
+
TestOffline (78.61s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-476479 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-476479 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.619895971s)
helpers_test.go:175: Cleaning up "offline-crio-476479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-476479
--- PASS: TestOffline (78.61s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-052630
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-052630: exit status 85 (56.272401ms)

                                                
                                                
-- stdout --
	* Profile "addons-052630" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-052630"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-052630
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-052630: exit status 85 (57.115181ms)

                                                
                                                
-- stdout --
	* Profile "addons-052630" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-052630"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (131.71s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-052630 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-052630 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m11.707597434s)
--- PASS: TestAddons/Setup (131.71s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-052630 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-052630 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.99s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ht49x" [d06bf7d7-1e83-4fd0-ba06-2402f99a3e50] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004485113s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-052630
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-052630: (5.985875329s)
--- PASS: TestAddons/parallel/InspektorGadget (11.99s)

                                                
                                    
TestAddons/parallel/CSI (53.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 6.588858ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-052630 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-052630 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [85d4c188-1a46-42d4-816d-3574bafe50eb] Pending
helpers_test.go:344: "task-pv-pod" [85d4c188-1a46-42d4-816d-3574bafe50eb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [85d4c188-1a46-42d4-816d-3574bafe50eb] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.00469534s
addons_test.go:528: (dbg) Run:  kubectl --context addons-052630 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-052630 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-052630 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-052630 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-052630 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-052630 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-052630 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [90520416-96ff-4d24-9581-e3a3cf71b3b2] Pending
helpers_test.go:344: "task-pv-pod-restore" [90520416-96ff-4d24-9581-e3a3cf71b3b2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [90520416-96ff-4d24-9581-e3a3cf71b3b2] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004808677s
addons_test.go:570: (dbg) Run:  kubectl --context addons-052630 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-052630 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-052630 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-052630 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.850773037s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.68s)

                                                
                                    
TestAddons/parallel/Headlamp (19.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-052630 --alsologtostderr -v=1
I0923 12:38:39.512255  669447 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-prfcb" [0dd56b02-22af-4f87-9272-1d54727a6693] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-prfcb" [0dd56b02-22af-4f87-9272-1d54727a6693] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-prfcb" [0dd56b02-22af-4f87-9272-1d54727a6693] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005878862s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-052630 addons disable headlamp --alsologtostderr -v=1: (6.095514735s)
--- PASS: TestAddons/parallel/Headlamp (19.03s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-2tf2f" [7bfaa6dc-7b6d-496c-8757-ef15d11690c4] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004035558s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-052630
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
TestAddons/parallel/LocalPath (12.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-052630 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-052630 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052630 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [536d721a-6a0b-44c4-b6b1-62d86dac2f3b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [536d721a-6a0b-44c4-b6b1-62d86dac2f3b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [536d721a-6a0b-44c4-b6b1-62d86dac2f3b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004523683s
addons_test.go:938: (dbg) Run:  kubectl --context addons-052630 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 ssh "cat /opt/local-path-provisioner/pvc-5738aee6-f638-4bad-bf82-f8a96b05fb86_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-052630 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-052630 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.14s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.5s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fhnrr" [8455a016-6ce8-40d4-bd64-ec3d2e30f774] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004879622s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-052630
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

                                                
                                    
TestAddons/parallel/Yakd (10.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-25glb" [87a5a852-ba29-4ec8-ae1e-81409436caa6] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005057546s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-052630 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-052630 addons disable yakd --alsologtostderr -v=1: (5.742267665s)
--- PASS: TestAddons/parallel/Yakd (10.75s)

                                                
                                    
TestAddons/StoppedEnableDisable (92.74s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-052630
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-052630: (1m32.44066412s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-052630
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-052630
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-052630
--- PASS: TestAddons/StoppedEnableDisable (92.74s)

                                                
                                    
TestCertOptions (55.7s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-049900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0923 13:50:12.251940  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-049900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (54.163500402s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-049900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-049900 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-049900 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-049900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-049900
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-049900: (1.031884664s)
--- PASS: TestCertOptions (55.70s)

                                                
                                    
TestCertExpiration (302.1s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-861603 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-861603 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m28.27951264s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-861603 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-861603 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (32.752035738s)
helpers_test.go:175: Cleaning up "cert-expiration-861603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-861603
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-861603: (1.070892982s)
--- PASS: TestCertExpiration (302.10s)

                                                
                                    
TestForceSystemdFlag (73.87s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-354291 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-354291 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m12.646874259s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-354291 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-354291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-354291
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-354291: (1.023799493s)
--- PASS: TestForceSystemdFlag (73.87s)

                                                
                                    
TestForceSystemdEnv (68.26s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-640763 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-640763 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.221090844s)
helpers_test.go:175: Cleaning up "force-systemd-env-640763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-640763
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-640763: (1.038250278s)
--- PASS: TestForceSystemdEnv (68.26s)

                                                
                                    
TestKVMDriverInstallOrUpdate (6.16s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0923 13:48:05.580712  669447 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 13:48:05.580883  669447 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0923 13:48:05.618059  669447 install.go:62] docker-machine-driver-kvm2: exit status 1
W0923 13:48:05.618518  669447 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0923 13:48:05.618590  669447 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1309809378/001/docker-machine-driver-kvm2
I0923 13:48:05.844297  669447 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1309809378/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000711760 gz:0xc000711768 tar:0xc000711710 tar.bz2:0xc000711720 tar.gz:0xc000711730 tar.xz:0xc000711740 tar.zst:0xc000711750 tbz2:0xc000711720 tgz:0xc000711730 txz:0xc000711740 tzst:0xc000711750 xz:0xc000711770 zip:0xc000711780 zst:0xc000711778] Getters:map[file:0xc000afa650 http:0xc00055d0e0 https:0xc00055d130] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 13:48:05.844373  669447 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1309809378/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (6.16s)
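For reference, the fallback exercised above (download the arch-specific driver URL first, then retry the common, un-suffixed URL when the checksum fetch returns 404) can be sketched roughly as follows. This is a minimal illustration, not minikube's actual download.go/driver.go code; fetch, downloadDriver, and the /tmp destination path are hypothetical names chosen for the sketch, while the release URL base comes from the log above.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url into dst and fails on any non-200 response,
// which is the condition that triggers the fallback in the log above.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

// downloadDriver tries the arch-specific binary first and falls back to the
// common name, mirroring the "trying to get the common version" message.
func downloadDriver(base, driver, arch, dst string) error {
	if err := fetch(fmt.Sprintf("%s/%s-%s", base, driver, arch), dst); err == nil {
		return nil
	}
	return fetch(fmt.Sprintf("%s/%s", base, driver), dst)
}

func main() {
	err := downloadDriver(
		"https://github.com/kubernetes/minikube/releases/download/v1.3.0",
		"docker-machine-driver-kvm2", "amd64",
		"/tmp/docker-machine-driver-kvm2")
	fmt.Println("download result:", err)
}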

                                                
                                    
TestErrorSpam/setup (42.69s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-993701 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-993701 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-993701 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-993701 --driver=kvm2  --container-runtime=crio: (42.689935938s)
--- PASS: TestErrorSpam/setup (42.69s)

                                                
                                    
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
TestErrorSpam/stop (5.63s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 stop: (2.327984766s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 stop: (1.259579893s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-993701 --log_dir /tmp/nospam-993701 stop: (2.039850421s)
--- PASS: TestErrorSpam/stop (5.63s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19690-662205/.minikube/files/etc/test/nested/copy/669447/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.78s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741768 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-741768 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (57.783041324s)
--- PASS: TestFunctional/serial/StartWithProxy (57.78s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (49.43s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0923 12:48:00.039432  669447 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741768 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-741768 --alsologtostderr -v=8: (49.429041862s)
functional_test.go:663: soft start took 49.430040174s for "functional-741768" cluster.
I0923 12:48:49.468903  669447 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (49.43s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-741768 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-741768 cache add registry.k8s.io/pause:3.1: (1.335414506s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-741768 cache add registry.k8s.io/pause:3.3: (1.451876591s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-741768 cache add registry.k8s.io/pause:latest: (1.287472751s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-741768 /tmp/TestFunctionalserialCacheCmdcacheadd_local1808200763/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 cache add minikube-local-cache-test:functional-741768
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-741768 cache add minikube-local-cache-test:functional-741768: (1.795080042s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 cache delete minikube-local-cache-test:functional-741768
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-741768
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741768 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (218.136984ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-741768 cache reload: (1.087201433s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)
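The verify → reload → verify flow above can be reproduced with a short driver around the same CLI calls. This is a hedged sketch, not the test's own code: it assumes a locally installed "minikube" binary on PATH (the test invokes out/minikube-linux-amd64 instead), and imagePresent is a name invented here; the profile, image, and subcommands (ssh, cache reload, crictl inspecti) are taken from the log.

package main

import (
	"log"
	"os/exec"
)

// imagePresent reports whether the image exists on the node; `crictl inspecti`
// exits non-zero when the image is absent, as seen in the log above.
func imagePresent(profile, image string) bool {
	cmd := exec.Command("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image)
	return cmd.Run() == nil
}

func main() {
	const profile, image = "functional-741768", "registry.k8s.io/pause:latest"
	if !imagePresent(profile, image) {
		// `cache reload` pushes images from the local cache back into the node.
		if err := exec.Command("minikube", "-p", profile, "cache", "reload").Run(); err != nil {
			log.Fatalf("cache reload failed: %v", err)
		}
	}
	log.Printf("image present after reload: %v", imagePresent(profile, image))
}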

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 kubectl -- --context functional-741768 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-741768 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (383.54s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741768 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0923 12:50:36.850271  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:36.856835  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:36.868321  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:36.889857  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:36.931288  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:37.012838  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:37.174476  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:37.496821  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:38.138963  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:39.420652  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:41.982040  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:47.103468  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:50:57.345514  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:51:17.827074  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:51:58.790435  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:20.715407  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-741768 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m23.536166804s)
functional_test.go:761: restart took 6m23.53632946s for "functional-741768" cluster.
I0923 12:55:21.798394  669447 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (383.54s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-741768 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
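The phase/status lines above come from parsing the JSON emitted by the kubectl command at the top of this test. A minimal sketch of that check follows; it is not the test's implementation, the podList struct covers only the fields needed here, and the "component" label lookup is an assumption based on the standard labels on control-plane static pods.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just enough of `kubectl get po -o=json` output for this check.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-741768",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}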

                                                
                                    
TestFunctional/serial/LogsCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-741768 logs: (1.226425409s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 logs --file /tmp/TestFunctionalserialLogsFileCmd956951063/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-741768 logs --file /tmp/TestFunctionalserialLogsFileCmd956951063/001/logs.txt: (1.221333203s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.22s)

                                                
                                    
TestFunctional/serial/InvalidService (4.37s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-741768 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-741768
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-741768: exit status 115 (298.093264ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.190:30305 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-741768 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741768 config get cpus: exit status 14 (58.63241ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741768 config get cpus: exit status 14 (58.237453ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
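A minimal replay of the round trip above; exit code 14 on "config get" for an unset key is exactly what the test asserts (the echo $? line is added for illustration):

    out/minikube-linux-amd64 -p functional-741768 config unset cpus
    out/minikube-linux-amd64 -p functional-741768 config get cpus; echo $?   # "key could not be found" error, exit 14
    out/minikube-linux-amd64 -p functional-741768 config set cpus 2
    out/minikube-linux-amd64 -p functional-741768 config get cpus            # prints 2
    out/minikube-linux-amd64 -p functional-741768 config unset cpus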

                                                
                                    
TestFunctional/parallel/DashboardCmd (27.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-741768 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-741768 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 681347: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (27.07s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741768 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-741768 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.70989ms)

                                                
                                                
-- stdout --
	* [functional-741768] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 12:55:43.138785  680813 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:55:43.139038  680813 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:55:43.139046  680813 out.go:358] Setting ErrFile to fd 2...
	I0923 12:55:43.139051  680813 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:55:43.139226  680813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 12:55:43.139808  680813 out.go:352] Setting JSON to false
	I0923 12:55:43.140996  680813 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9486,"bootTime":1727086657,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:55:43.141121  680813 start.go:139] virtualization: kvm guest
	I0923 12:55:43.143544  680813 out.go:177] * [functional-741768] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:55:43.145364  680813 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:55:43.145423  680813 notify.go:220] Checking for updates...
	I0923 12:55:43.148395  680813 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:55:43.149864  680813 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:55:43.151373  680813 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:55:43.152875  680813 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:55:43.154107  680813 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:55:43.155782  680813 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:55:43.156351  680813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:55:43.156441  680813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:55:43.172295  680813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43909
	I0923 12:55:43.172828  680813 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:55:43.173411  680813 main.go:141] libmachine: Using API Version  1
	I0923 12:55:43.173432  680813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:55:43.173752  680813 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:55:43.173983  680813 main.go:141] libmachine: (functional-741768) Calling .DriverName
	I0923 12:55:43.174256  680813 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:55:43.174553  680813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:55:43.174592  680813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:55:43.190657  680813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0923 12:55:43.191124  680813 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:55:43.191767  680813 main.go:141] libmachine: Using API Version  1
	I0923 12:55:43.191800  680813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:55:43.192229  680813 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:55:43.192429  680813 main.go:141] libmachine: (functional-741768) Calling .DriverName
	I0923 12:55:43.237103  680813 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 12:55:43.238684  680813 start.go:297] selected driver: kvm2
	I0923 12:55:43.238706  680813 start.go:901] validating driver "kvm2" against &{Name:functional-741768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-741768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:55:43.238838  680813 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:55:43.241870  680813 out.go:201] 
	W0923 12:55:43.243709  680813 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 12:55:43.245293  680813 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741768 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
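Both invocations can be replayed as-is: the first is expected to exit 23 because the requested 250MB is below minikube's usable minimum of 1800MB, while the second dry run (no --memory override) validates the existing profile cleanly:

    out/minikube-linux-amd64 start -p functional-741768 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
    echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
    out/minikube-linux-amd64 start -p functional-741768 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio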

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-741768 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-741768 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.618801ms)

                                                
                                                
-- stdout --
	* [functional-741768] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 12:55:43.445568  680905 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:55:43.445683  680905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:55:43.445689  680905 out.go:358] Setting ErrFile to fd 2...
	I0923 12:55:43.445693  680905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:55:43.446016  680905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 12:55:43.446599  680905 out.go:352] Setting JSON to false
	I0923 12:55:43.447626  680905 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9486,"bootTime":1727086657,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:55:43.447745  680905 start.go:139] virtualization: kvm guest
	I0923 12:55:43.450228  680905 out.go:177] * [functional-741768] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0923 12:55:43.451891  680905 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:55:43.451928  680905 notify.go:220] Checking for updates...
	I0923 12:55:43.454492  680905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:55:43.456270  680905 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 12:55:43.457870  680905 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 12:55:43.459513  680905 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:55:43.460961  680905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:55:43.462586  680905 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 12:55:43.462996  680905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:55:43.463082  680905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:55:43.478961  680905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39147
	I0923 12:55:43.479450  680905 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:55:43.480073  680905 main.go:141] libmachine: Using API Version  1
	I0923 12:55:43.480095  680905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:55:43.480498  680905 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:55:43.480708  680905 main.go:141] libmachine: (functional-741768) Calling .DriverName
	I0923 12:55:43.480957  680905 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:55:43.481283  680905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 12:55:43.481329  680905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:55:43.498415  680905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I0923 12:55:43.498905  680905 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:55:43.499571  680905 main.go:141] libmachine: Using API Version  1
	I0923 12:55:43.499597  680905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:55:43.500002  680905 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:55:43.500205  680905 main.go:141] libmachine: (functional-741768) Calling .DriverName
	I0923 12:55:43.537052  680905 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0923 12:55:43.538611  680905 start.go:297] selected driver: kvm2
	I0923 12:55:43.538634  680905 start.go:901] validating driver "kvm2" against &{Name:functional-741768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-741768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:55:43.538882  680905 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:55:43.541213  680905 out.go:201] 
	W0923 12:55:43.543132  680905 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 12:55:43.544765  680905 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
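The French stderr above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo") is the localized form of the English RSRC_INSUFFICIENT_REQ_MEMORY message seen in the DryRun test. A rough way to reproduce it manually, assuming minikube picks its output language from the standard locale environment variables (the exact variable this test sets is not visible in this log):

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-741768 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio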

                                                
                                    
TestFunctional/parallel/StatusCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)
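The three forms checked above are the default table output, a custom Go template, and JSON; quoting the template is advisable in an interactive shell, and the stray "kublet" label comes from the format string passed by the test rather than from minikube itself:

    out/minikube-linux-amd64 -p functional-741768 status
    out/minikube-linux-amd64 -p functional-741768 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-741768 status -o json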

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-741768 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-741768 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6qwvc" [0f31a685-f321-4751-8933-b5cb6a81b9c2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6qwvc" [0f31a685-f321-4751-8933-b5cb6a81b9c2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004159024s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.190:32177
functional_test.go:1675: http://192.168.39.190:32177: success! body:
Hostname: hello-node-connect-67bdd5bbb4-6qwvc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.190:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.190:32177
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.89s)
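The same connectivity check can be done by hand: create the echoserver deployment, expose it as a NodePort service, ask minikube for the URL, and fetch it (curl stands in here for the HTTP GET the test performs; the NodePort, 32177 in this run, varies):

    kubectl --context functional-741768 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-741768 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-741768 service hello-node-connect --url
    curl http://192.168.39.190:32177/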

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (51.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [498bf9d0-1c9f-43ff-b846-c841b7ae3013] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00527668s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-741768 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-741768 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-741768 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-741768 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [62ccac6e-eb07-4859-b65f-6000756f421a] Pending
helpers_test.go:344: "sp-pod" [62ccac6e-eb07-4859-b65f-6000756f421a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [62ccac6e-eb07-4859-b65f-6000756f421a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004372762s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-741768 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-741768 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-741768 delete -f testdata/storage-provisioner/pod.yaml: (1.484792281s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-741768 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0cb42748-5a81-47f0-baa8-761b22d4cd47] Pending
helpers_test.go:344: "sp-pod" [0cb42748-5a81-47f0-baa8-761b22d4cd47] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0cb42748-5a81-47f0-baa8-761b22d4cd47] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.00447181s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-741768 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.29s)
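The sequence above verifies that data written to the PVC-backed mount survives deletion and re-creation of the consuming pod; replayed by hand it is:

    kubectl --context functional-741768 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-741768 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-741768 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-741768 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-741768 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-741768 exec sp-pod -- ls /tmp/mount   # foo should still be listed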

                                                
                                    
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh -n functional-741768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 cp functional-741768:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3515179692/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh -n functional-741768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh -n functional-741768 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.32s)
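The copy-in/copy-out round trip above, with the host-side destination shortened here for readability (the test writes into a per-test temp directory):

    out/minikube-linux-amd64 -p functional-741768 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-741768 ssh -n functional-741768 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-741768 cp functional-741768:/home/docker/cp-test.txt /tmp/cp-test.txt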

                                                
                                    
TestFunctional/parallel/MySQL (25.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-741768 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-9j4nh" [cc80a4c7-5938-4e8b-a936-107d02cb66a3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-9j4nh" [cc80a4c7-5938-4e8b-a936-107d02cb66a3] Running
E0923 12:56:04.557370  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.005761193s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-741768 exec mysql-6cdb49bbb-9j4nh -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-741768 exec mysql-6cdb49bbb-9j4nh -- mysql -ppassword -e "show databases;": exit status 1 (280.349498ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 12:56:07.418908  669447 retry.go:31] will retry after 1.387068591s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-741768 exec mysql-6cdb49bbb-9j4nh -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-741768 exec mysql-6cdb49bbb-9j4nh -- mysql -ppassword -e "show databases;": exit status 1 (144.230629ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 12:56:08.951292  669447 retry.go:31] will retry after 1.122196933s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-741768 exec mysql-6cdb49bbb-9j4nh -- mysql -ppassword -e "show databases;"
2024/09/23 12:56:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (25.29s)
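The ERROR 2002 retries above are expected while mysqld is still initializing inside the pod; once it is up, the probe is a single exec (pod name taken from this run, it changes per deployment):

    kubectl --context functional-741768 replace --force -f testdata/mysql.yaml
    kubectl --context functional-741768 exec mysql-6cdb49bbb-9j4nh -- mysql -ppassword -e "show databases;"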

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/669447/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "sudo cat /etc/test/nested/copy/669447/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/669447.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "sudo cat /etc/ssl/certs/669447.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/669447.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "sudo cat /usr/share/ca-certificates/669447.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/6694472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "sudo cat /etc/ssl/certs/6694472.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/6694472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "sudo cat /usr/share/ca-certificates/6694472.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.36s)
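FileSync and CertSync both reduce to checking that files synced from the host show up inside the VM; the 669447 path component matches the test process id seen throughout this log, so the exact paths differ between runs:

    out/minikube-linux-amd64 -p functional-741768 ssh "sudo cat /etc/test/nested/copy/669447/hosts"
    out/minikube-linux-amd64 -p functional-741768 ssh "sudo cat /etc/ssl/certs/669447.pem"
    out/minikube-linux-amd64 -p functional-741768 ssh "sudo cat /etc/ssl/certs/51391683.0"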

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-741768 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741768 ssh "sudo systemctl is-active docker": exit status 1 (232.394032ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741768 ssh "sudo systemctl is-active containerd": exit status 1 (229.763029ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
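With crio as the active runtime, docker and containerd should both report inactive; systemctl is-active exits non-zero for an inactive unit, which is why the non-zero exit above is the passing case:

    out/minikube-linux-amd64 -p functional-741768 ssh "sudo systemctl is-active docker"       # prints inactive, non-zero exit
    out/minikube-linux-amd64 -p functional-741768 ssh "sudo systemctl is-active containerd"   # prints inactive, non-zero exit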

                                                
                                    
TestFunctional/parallel/License (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.49s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-741768 image ls --format short --alsologtostderr: (1.092853952s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-741768 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-741768
localhost/kicbase/echo-server:functional-741768
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-741768 image ls --format short --alsologtostderr:
I0923 12:55:55.866019  681892 out.go:345] Setting OutFile to fd 1 ...
I0923 12:55:55.866137  681892 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:55:55.866146  681892 out.go:358] Setting ErrFile to fd 2...
I0923 12:55:55.866150  681892 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:55:55.866389  681892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
I0923 12:55:55.867100  681892 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 12:55:55.867218  681892 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 12:55:55.867644  681892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 12:55:55.867715  681892 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:55:55.884635  681892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
I0923 12:55:55.885335  681892 main.go:141] libmachine: () Calling .GetVersion
I0923 12:55:55.886024  681892 main.go:141] libmachine: Using API Version  1
I0923 12:55:55.886059  681892 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:55:55.886451  681892 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:55:55.886668  681892 main.go:141] libmachine: (functional-741768) Calling .GetState
I0923 12:55:55.888824  681892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 12:55:55.888871  681892 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:55:55.904616  681892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43361
I0923 12:55:55.905222  681892 main.go:141] libmachine: () Calling .GetVersion
I0923 12:55:55.905866  681892 main.go:141] libmachine: Using API Version  1
I0923 12:55:55.905897  681892 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:55:55.906235  681892 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:55:55.906464  681892 main.go:141] libmachine: (functional-741768) Calling .DriverName
I0923 12:55:55.906737  681892 ssh_runner.go:195] Run: systemctl --version
I0923 12:55:55.906765  681892 main.go:141] libmachine: (functional-741768) Calling .GetSSHHostname
I0923 12:55:55.909972  681892 main.go:141] libmachine: (functional-741768) DBG | domain functional-741768 has defined MAC address 52:54:00:f2:3d:95 in network mk-functional-741768
I0923 12:55:55.910410  681892 main.go:141] libmachine: (functional-741768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3d:95", ip: ""} in network mk-functional-741768: {Iface:virbr1 ExpiryTime:2024-09-23 13:47:16 +0000 UTC Type:0 Mac:52:54:00:f2:3d:95 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-741768 Clientid:01:52:54:00:f2:3d:95}
I0923 12:55:55.910457  681892 main.go:141] libmachine: (functional-741768) DBG | domain functional-741768 has defined IP address 192.168.39.190 and MAC address 52:54:00:f2:3d:95 in network mk-functional-741768
I0923 12:55:55.910600  681892 main.go:141] libmachine: (functional-741768) Calling .GetSSHPort
I0923 12:55:55.910784  681892 main.go:141] libmachine: (functional-741768) Calling .GetSSHKeyPath
I0923 12:55:55.910979  681892 main.go:141] libmachine: (functional-741768) Calling .GetSSHUsername
I0923 12:55:55.911130  681892 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/functional-741768/id_rsa Username:docker}
I0923 12:55:56.038503  681892 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 12:55:56.903705  681892 main.go:141] libmachine: Making call to close driver server
I0923 12:55:56.903723  681892 main.go:141] libmachine: (functional-741768) Calling .Close
I0923 12:55:56.904049  681892 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:55:56.904072  681892 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:55:56.904093  681892 main.go:141] libmachine: Making call to close driver server
I0923 12:55:56.904160  681892 main.go:141] libmachine: (functional-741768) DBG | Closing plugin on server side
I0923 12:55:56.904289  681892 main.go:141] libmachine: (functional-741768) Calling .Close
I0923 12:55:56.904571  681892 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:55:56.904583  681892 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:55:56.904610  681892 main.go:141] libmachine: (functional-741768) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.09s)
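As the stderr trace shows, the image listing is gathered by SSH-ing into the VM and running sudo crictl images --output json; the same inventory can be rendered in the other formats exercised in this run, or queried directly:

    out/minikube-linux-amd64 -p functional-741768 image ls --format short
    out/minikube-linux-amd64 -p functional-741768 image ls --format table
    out/minikube-linux-amd64 -p functional-741768 image ls --format json
    out/minikube-linux-amd64 -p functional-741768 ssh "sudo crictl images --output json"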

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-741768 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-741768  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-741768  | 0761f00aca23d | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-741768  | df7c4ec26d36f | 1.47MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-741768 image ls --format table --alsologtostderr:
I0923 12:56:06.058037  682071 out.go:345] Setting OutFile to fd 1 ...
I0923 12:56:06.058291  682071 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:56:06.058299  682071 out.go:358] Setting ErrFile to fd 2...
I0923 12:56:06.058303  682071 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:56:06.058484  682071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
I0923 12:56:06.059099  682071 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 12:56:06.059199  682071 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 12:56:06.059582  682071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 12:56:06.059630  682071 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:56:06.075796  682071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
I0923 12:56:06.076415  682071 main.go:141] libmachine: () Calling .GetVersion
I0923 12:56:06.077091  682071 main.go:141] libmachine: Using API Version  1
I0923 12:56:06.077120  682071 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:56:06.077568  682071 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:56:06.077794  682071 main.go:141] libmachine: (functional-741768) Calling .GetState
I0923 12:56:06.084108  682071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 12:56:06.084171  682071 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:56:06.099931  682071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
I0923 12:56:06.100451  682071 main.go:141] libmachine: () Calling .GetVersion
I0923 12:56:06.101017  682071 main.go:141] libmachine: Using API Version  1
I0923 12:56:06.101047  682071 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:56:06.101382  682071 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:56:06.101563  682071 main.go:141] libmachine: (functional-741768) Calling .DriverName
I0923 12:56:06.101768  682071 ssh_runner.go:195] Run: systemctl --version
I0923 12:56:06.101808  682071 main.go:141] libmachine: (functional-741768) Calling .GetSSHHostname
I0923 12:56:06.105136  682071 main.go:141] libmachine: (functional-741768) DBG | domain functional-741768 has defined MAC address 52:54:00:f2:3d:95 in network mk-functional-741768
I0923 12:56:06.105560  682071 main.go:141] libmachine: (functional-741768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3d:95", ip: ""} in network mk-functional-741768: {Iface:virbr1 ExpiryTime:2024-09-23 13:47:16 +0000 UTC Type:0 Mac:52:54:00:f2:3d:95 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-741768 Clientid:01:52:54:00:f2:3d:95}
I0923 12:56:06.105604  682071 main.go:141] libmachine: (functional-741768) DBG | domain functional-741768 has defined IP address 192.168.39.190 and MAC address 52:54:00:f2:3d:95 in network mk-functional-741768
I0923 12:56:06.105711  682071 main.go:141] libmachine: (functional-741768) Calling .GetSSHPort
I0923 12:56:06.105911  682071 main.go:141] libmachine: (functional-741768) Calling .GetSSHKeyPath
I0923 12:56:06.106104  682071 main.go:141] libmachine: (functional-741768) Calling .GetSSHUsername
I0923 12:56:06.106273  682071 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/functional-741768/id_rsa Username:docker}
I0923 12:56:06.226909  682071 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 12:56:06.312316  682071 main.go:141] libmachine: Making call to close driver server
I0923 12:56:06.312351  682071 main.go:141] libmachine: (functional-741768) Calling .Close
I0923 12:56:06.312669  682071 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:56:06.312689  682071 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:56:06.312698  682071 main.go:141] libmachine: Making call to close driver server
I0923 12:56:06.312706  682071 main.go:141] libmachine: (functional-741768) Calling .Close
I0923 12:56:06.312719  682071 main.go:141] libmachine: (functional-741768) DBG | Closing plugin on server side
I0923 12:56:06.312957  682071 main.go:141] libmachine: (functional-741768) DBG | Closing plugin on server side
I0923 12:56:06.312993  682071 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:56:06.313006  682071 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-741768 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","do
cker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"df7c4ec26d36f15e32d6d90b48355ad987cd6fdd8560e01510cafa9bae3e880d","repoDigests":["localhost/my-image@sha256:689cfa6da47007970b4981d52fcd8a88ac7fa4bac564ba2c747a5cf448684173"],"repoTags":["localhost/my-image:functional-741768"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97
846543"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76
e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-741768"],"size":"4943877"},{"id":"0761f00aca23d15a5068d31a0fc45bdae8
bd567221d1c351b83e1362ccf4ef88","repoDigests":["localhost/minikube-local-cache-test@sha256:2fb7a2c4aacb0b7289889c9849d41ec8396418a596b71b848a31c77ecece9066"],"repoTags":["localhost/minikube-local-cache-test:functional-741768"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"12968670680f4561ef6818782391
eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"0981a1c8f9eb5ac691dd3bf616730e694c6ff33605f1bb879050fa35e9582895","repoDigests":["docker.io/library/e47e58323bfd6af2e05599828a750edeb5be349cd23b9b07c0c07bb84a5bae2c-tmp@sha256:8186ff39bc5d99aeaf4c5f13fb8065b7a3c04135c7e0fac38552cc62e857c8d4"],"repoTags":[],"size":"1466018"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135
b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-741768 image ls --format json --alsologtostderr:
I0923 12:56:05.733602  682047 out.go:345] Setting OutFile to fd 1 ...
I0923 12:56:05.733777  682047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:56:05.733788  682047 out.go:358] Setting ErrFile to fd 2...
I0923 12:56:05.733795  682047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:56:05.734014  682047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
I0923 12:56:05.734702  682047 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 12:56:05.734836  682047 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 12:56:05.735262  682047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 12:56:05.735326  682047 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:56:05.751790  682047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
I0923 12:56:05.752447  682047 main.go:141] libmachine: () Calling .GetVersion
I0923 12:56:05.753111  682047 main.go:141] libmachine: Using API Version  1
I0923 12:56:05.753136  682047 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:56:05.753547  682047 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:56:05.753795  682047 main.go:141] libmachine: (functional-741768) Calling .GetState
I0923 12:56:05.756028  682047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 12:56:05.756159  682047 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:56:05.771908  682047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42047
I0923 12:56:05.772410  682047 main.go:141] libmachine: () Calling .GetVersion
I0923 12:56:05.773026  682047 main.go:141] libmachine: Using API Version  1
I0923 12:56:05.773064  682047 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:56:05.773495  682047 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:56:05.773698  682047 main.go:141] libmachine: (functional-741768) Calling .DriverName
I0923 12:56:05.773958  682047 ssh_runner.go:195] Run: systemctl --version
I0923 12:56:05.773994  682047 main.go:141] libmachine: (functional-741768) Calling .GetSSHHostname
I0923 12:56:05.776841  682047 main.go:141] libmachine: (functional-741768) DBG | domain functional-741768 has defined MAC address 52:54:00:f2:3d:95 in network mk-functional-741768
I0923 12:56:05.777279  682047 main.go:141] libmachine: (functional-741768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3d:95", ip: ""} in network mk-functional-741768: {Iface:virbr1 ExpiryTime:2024-09-23 13:47:16 +0000 UTC Type:0 Mac:52:54:00:f2:3d:95 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-741768 Clientid:01:52:54:00:f2:3d:95}
I0923 12:56:05.777311  682047 main.go:141] libmachine: (functional-741768) DBG | domain functional-741768 has defined IP address 192.168.39.190 and MAC address 52:54:00:f2:3d:95 in network mk-functional-741768
I0923 12:56:05.777407  682047 main.go:141] libmachine: (functional-741768) Calling .GetSSHPort
I0923 12:56:05.777602  682047 main.go:141] libmachine: (functional-741768) Calling .GetSSHKeyPath
I0923 12:56:05.777800  682047 main.go:141] libmachine: (functional-741768) Calling .GetSSHUsername
I0923 12:56:05.777977  682047 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/functional-741768/id_rsa Username:docker}
I0923 12:56:05.893378  682047 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 12:56:05.989202  682047 main.go:141] libmachine: Making call to close driver server
I0923 12:56:05.989221  682047 main.go:141] libmachine: (functional-741768) Calling .Close
I0923 12:56:05.989520  682047 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:56:05.989544  682047 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:56:05.989571  682047 main.go:141] libmachine: Making call to close driver server
I0923 12:56:05.989580  682047 main.go:141] libmachine: (functional-741768) Calling .Close
I0923 12:56:05.989588  682047 main.go:141] libmachine: (functional-741768) DBG | Closing plugin on server side
I0923 12:56:05.989960  682047 main.go:141] libmachine: (functional-741768) DBG | Closing plugin on server side
I0923 12:56:05.990035  682047 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:56:05.990103  682047 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
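
The stdout above is one JSON array of image records with id, repoDigests, repoTags and size fields (size is a decimal string of bytes). A minimal standalone Go sketch, not part of the test suite, that runs the same `image ls --format json` command and decodes it could look like this; it assumes a `minikube` binary on PATH (the CI run calls out/minikube-linux-amd64) and reuses the profile name functional-741768 from this run.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors one element of the JSON array printed above by
// `minikube image ls --format json` (sizes are reported as strings of bytes).
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Assumes `minikube` on PATH; the profile name is taken from this run.
	out, err := exec.Command("minikube", "-p", "functional-741768",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}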

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-741768 image ls --format yaml --alsologtostderr:
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0761f00aca23d15a5068d31a0fc45bdae8bd567221d1c351b83e1362ccf4ef88
repoDigests:
- localhost/minikube-local-cache-test@sha256:2fb7a2c4aacb0b7289889c9849d41ec8396418a596b71b848a31c77ecece9066
repoTags:
- localhost/minikube-local-cache-test:functional-741768
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-741768
size: "4943877"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-741768 image ls --format yaml --alsologtostderr:
I0923 12:55:56.962195  681915 out.go:345] Setting OutFile to fd 1 ...
I0923 12:55:56.962314  681915 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:55:56.962322  681915 out.go:358] Setting ErrFile to fd 2...
I0923 12:55:56.962326  681915 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:55:56.962527  681915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
I0923 12:55:56.963337  681915 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 12:55:56.963474  681915 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 12:55:56.963945  681915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 12:55:56.963999  681915 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:55:56.980274  681915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44713
I0923 12:55:56.980837  681915 main.go:141] libmachine: () Calling .GetVersion
I0923 12:55:56.981547  681915 main.go:141] libmachine: Using API Version  1
I0923 12:55:56.981576  681915 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:55:56.982085  681915 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:55:56.982317  681915 main.go:141] libmachine: (functional-741768) Calling .GetState
I0923 12:55:56.984862  681915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 12:55:56.984925  681915 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:55:57.001508  681915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
I0923 12:55:57.002100  681915 main.go:141] libmachine: () Calling .GetVersion
I0923 12:55:57.002727  681915 main.go:141] libmachine: Using API Version  1
I0923 12:55:57.002758  681915 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:55:57.003203  681915 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:55:57.003524  681915 main.go:141] libmachine: (functional-741768) Calling .DriverName
I0923 12:55:57.003790  681915 ssh_runner.go:195] Run: systemctl --version
I0923 12:55:57.003835  681915 main.go:141] libmachine: (functional-741768) Calling .GetSSHHostname
I0923 12:55:57.006868  681915 main.go:141] libmachine: (functional-741768) DBG | domain functional-741768 has defined MAC address 52:54:00:f2:3d:95 in network mk-functional-741768
I0923 12:55:57.007436  681915 main.go:141] libmachine: (functional-741768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3d:95", ip: ""} in network mk-functional-741768: {Iface:virbr1 ExpiryTime:2024-09-23 13:47:16 +0000 UTC Type:0 Mac:52:54:00:f2:3d:95 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-741768 Clientid:01:52:54:00:f2:3d:95}
I0923 12:55:57.007484  681915 main.go:141] libmachine: (functional-741768) DBG | domain functional-741768 has defined IP address 192.168.39.190 and MAC address 52:54:00:f2:3d:95 in network mk-functional-741768
I0923 12:55:57.007707  681915 main.go:141] libmachine: (functional-741768) Calling .GetSSHPort
I0923 12:55:57.007932  681915 main.go:141] libmachine: (functional-741768) Calling .GetSSHKeyPath
I0923 12:55:57.008095  681915 main.go:141] libmachine: (functional-741768) Calling .GetSSHUsername
I0923 12:55:57.008229  681915 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/functional-741768/id_rsa Username:docker}
I0923 12:55:57.129145  681915 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 12:55:57.204911  681915 main.go:141] libmachine: Making call to close driver server
I0923 12:55:57.204941  681915 main.go:141] libmachine: (functional-741768) Calling .Close
I0923 12:55:57.205292  681915 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:55:57.205316  681915 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:55:57.205335  681915 main.go:141] libmachine: Making call to close driver server
I0923 12:55:57.205346  681915 main.go:141] libmachine: (functional-741768) Calling .Close
I0923 12:55:57.205670  681915 main.go:141] libmachine: (functional-741768) DBG | Closing plugin on server side
I0923 12:55:57.205665  681915 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:55:57.205712  681915 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (8.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741768 ssh pgrep buildkitd: exit status 1 (211.243481ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image build -t localhost/my-image:functional-741768 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-741768 image build -t localhost/my-image:functional-741768 testdata/build --alsologtostderr: (7.956582125s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-741768 image build -t localhost/my-image:functional-741768 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0981a1c8f9e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-741768
--> df7c4ec26d3
Successfully tagged localhost/my-image:functional-741768
df7c4ec26d36f15e32d6d90b48355ad987cd6fdd8560e01510cafa9bae3e880d
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-741768 image build -t localhost/my-image:functional-741768 testdata/build --alsologtostderr:
I0923 12:55:57.475612  681969 out.go:345] Setting OutFile to fd 1 ...
I0923 12:55:57.476165  681969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:55:57.476185  681969 out.go:358] Setting ErrFile to fd 2...
I0923 12:55:57.476192  681969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:55:57.476648  681969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
I0923 12:55:57.477997  681969 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 12:55:57.478708  681969 config.go:182] Loaded profile config "functional-741768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 12:55:57.479137  681969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 12:55:57.479183  681969 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:55:57.495208  681969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44953
I0923 12:55:57.495786  681969 main.go:141] libmachine: () Calling .GetVersion
I0923 12:55:57.496465  681969 main.go:141] libmachine: Using API Version  1
I0923 12:55:57.496501  681969 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:55:57.496857  681969 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:55:57.497059  681969 main.go:141] libmachine: (functional-741768) Calling .GetState
I0923 12:55:57.499052  681969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 12:55:57.499113  681969 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:55:57.514927  681969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42509
I0923 12:55:57.515466  681969 main.go:141] libmachine: () Calling .GetVersion
I0923 12:55:57.516120  681969 main.go:141] libmachine: Using API Version  1
I0923 12:55:57.516155  681969 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:55:57.516558  681969 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:55:57.516767  681969 main.go:141] libmachine: (functional-741768) Calling .DriverName
I0923 12:55:57.516974  681969 ssh_runner.go:195] Run: systemctl --version
I0923 12:55:57.517007  681969 main.go:141] libmachine: (functional-741768) Calling .GetSSHHostname
I0923 12:55:57.520210  681969 main.go:141] libmachine: (functional-741768) DBG | domain functional-741768 has defined MAC address 52:54:00:f2:3d:95 in network mk-functional-741768
I0923 12:55:57.520787  681969 main.go:141] libmachine: (functional-741768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:3d:95", ip: ""} in network mk-functional-741768: {Iface:virbr1 ExpiryTime:2024-09-23 13:47:16 +0000 UTC Type:0 Mac:52:54:00:f2:3d:95 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-741768 Clientid:01:52:54:00:f2:3d:95}
I0923 12:55:57.520818  681969 main.go:141] libmachine: (functional-741768) DBG | domain functional-741768 has defined IP address 192.168.39.190 and MAC address 52:54:00:f2:3d:95 in network mk-functional-741768
I0923 12:55:57.520992  681969 main.go:141] libmachine: (functional-741768) Calling .GetSSHPort
I0923 12:55:57.521246  681969 main.go:141] libmachine: (functional-741768) Calling .GetSSHKeyPath
I0923 12:55:57.521430  681969 main.go:141] libmachine: (functional-741768) Calling .GetSSHUsername
I0923 12:55:57.521594  681969 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/functional-741768/id_rsa Username:docker}
I0923 12:55:57.620843  681969 build_images.go:161] Building image from path: /tmp/build.448570361.tar
I0923 12:55:57.620920  681969 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 12:55:57.632131  681969 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.448570361.tar
I0923 12:55:57.636693  681969 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.448570361.tar: stat -c "%s %y" /var/lib/minikube/build/build.448570361.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.448570361.tar': No such file or directory
I0923 12:55:57.636746  681969 ssh_runner.go:362] scp /tmp/build.448570361.tar --> /var/lib/minikube/build/build.448570361.tar (3072 bytes)
I0923 12:55:57.692565  681969 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.448570361
I0923 12:55:57.725622  681969 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.448570361 -xf /var/lib/minikube/build/build.448570361.tar
I0923 12:55:57.741204  681969 crio.go:315] Building image: /var/lib/minikube/build/build.448570361
I0923 12:55:57.741337  681969 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-741768 /var/lib/minikube/build/build.448570361 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0923 12:56:05.346773  681969 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-741768 /var/lib/minikube/build/build.448570361 --cgroup-manager=cgroupfs: (7.605381305s)
I0923 12:56:05.346852  681969 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.448570361
I0923 12:56:05.362861  681969 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.448570361.tar
I0923 12:56:05.373764  681969 build_images.go:217] Built localhost/my-image:functional-741768 from /tmp/build.448570361.tar
I0923 12:56:05.373814  681969 build_images.go:133] succeeded building to: functional-741768
I0923 12:56:05.373820  681969 build_images.go:134] failed building to: 
I0923 12:56:05.373866  681969 main.go:141] libmachine: Making call to close driver server
I0923 12:56:05.373881  681969 main.go:141] libmachine: (functional-741768) Calling .Close
I0923 12:56:05.374240  681969 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:56:05.374263  681969 main.go:141] libmachine: (functional-741768) DBG | Closing plugin on server side
I0923 12:56:05.374265  681969 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:56:05.374289  681969 main.go:141] libmachine: Making call to close driver server
I0923 12:56:05.374298  681969 main.go:141] libmachine: (functional-741768) Calling .Close
I0923 12:56:05.374568  681969 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:56:05.374583  681969 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:56:05.374606  681969 main.go:141] libmachine: (functional-741768) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.47s)
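
As the build log above shows, `minikube image build` packs the context directory into a tar, copies it to /var/lib/minikube/build inside the VM, and runs `sudo podman build --cgroup-manager=cgroupfs` there. From the outside the test boils down to two CLI calls; a rough Go sketch of that outer flow follows, assuming `minikube` on PATH, the functional-741768 profile from this run, and a local testdata/build context directory.

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

const (
	profile = "functional-741768"             // profile name from this run
	tag     = "localhost/my-image:" + profile // same tag the test builds
)

// run invokes minikube against the profile and aborts on any failure.
func run(args ...string) []byte {
	out, err := exec.Command("minikube", append([]string{"-p", profile}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	// minikube tars the context, copies it under /var/lib/minikube/build in
	// the VM and builds it there with podman, as the log above shows.
	run("image", "build", "-t", tag, "testdata/build")

	// Confirm the tag is now listed, mirroring the test's final `image ls`.
	if !bytes.Contains(run("image", "ls"), []byte(tag)) {
		log.Fatalf("built image %s not listed", tag)
	}
	fmt.Println("image built and listed:", tag)
}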

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.85s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.824788833s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-741768
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.85s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-741768 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-741768 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-z2gb4" [7cd6d21a-37fb-402d-a579-e26c1d6c57d3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-z2gb4" [7cd6d21a-37fb-402d-a579-e26c1d6c57d3] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00446445s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)
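
The deployment flow above is plain kubectl: create a deployment from registry.k8s.io/echoserver:1.8, expose it as a NodePort on 8080, then wait for the pod to become ready. A small Go sketch of the same sequence, using the stock `kubectl` binary and the functional-741768 context from this run rather than the test's minikube kubectl wrapper:

package main

import (
	"log"
	"os/exec"
)

// kubectl runs one kubectl command against the functional-741768 context.
func kubectl(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-741768"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// Stands in for the test's 10m poll for pods matching app=hello-node.
	kubectl("wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=10m")
}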

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image load --daemon kicbase/echo-server:functional-741768 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-741768 image load --daemon kicbase/echo-server:functional-741768 --alsologtostderr: (3.068433627s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image load --daemon kicbase/echo-server:functional-741768 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-741768
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image load --daemon kicbase/echo-server:functional-741768 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image ls
E0923 12:55:36.850318  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image save kicbase/echo-server:functional-741768 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image rm kicbase/echo-server:functional-741768 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)
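
ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a tar round trip: `image save` writes the tagged image to a tarball on the host, `image rm` drops it from the runtime, and `image load` restores it. A short Go sketch of that round trip follows; the /tmp/echo-server-save.tar path is a stand-in for the workspace path used in CI, and `minikube` on PATH plus the functional-741768 profile are assumptions.

package main

import (
	"log"
	"os/exec"
)

func main() {
	profile := "functional-741768"
	tar := "/tmp/echo-server-save.tar" // hypothetical local path

	// Save, remove, then reload the echo-server image, as the three tests above do.
	steps := [][]string{
		{"-p", profile, "image", "save", "kicbase/echo-server:" + profile, tar},
		{"-p", profile, "image", "rm", "kicbase/echo-server:" + profile},
		{"-p", profile, "image", "load", tar},
	}
	for _, args := range steps {
		if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
			log.Fatalf("minikube %v: %v\n%s", args, err, out)
		}
	}
	log.Println("image saved, removed, and reloaded from", tar)
}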

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-741768
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 image save --daemon kicbase/echo-server:functional-741768 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-741768 image save --daemon kicbase/echo-server:functional-741768 --alsologtostderr: (3.110287659s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-741768
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 service list -o json
functional_test.go:1494: Took "289.936842ms" to run "out/minikube-linux-amd64 -p functional-741768 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.190:31760
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.190:31760
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
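
`minikube service hello-node --url` resolves the NodePort endpoint (http://192.168.39.190:31760 in this run; it will differ elsewhere). A hedged Go sketch that fetches the URL the same way and issues a single GET against it, assuming `minikube` on PATH and the functional-741768 profile:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort URL exactly as the test does.
	out, err := exec.Command("minikube", "-p", "functional-741768",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out))
	fmt.Println("endpoint:", url)

	// One request against the echoserver backing the service.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	fmt.Println("status:", resp.Status)
}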

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.81s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-741768 /tmp/TestFunctionalparallelMountCmdany-port2918450785/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727096142809382412" to /tmp/TestFunctionalparallelMountCmdany-port2918450785/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727096142809382412" to /tmp/TestFunctionalparallelMountCmdany-port2918450785/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727096142809382412" to /tmp/TestFunctionalparallelMountCmdany-port2918450785/001/test-1727096142809382412
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741768 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (243.165074ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:55:43.052887  669447 retry.go:31] will retry after 599.557777ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 12:55 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 12:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 12:55 test-1727096142809382412
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh cat /mount-9p/test-1727096142809382412
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-741768 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b175977f-8cc8-4816-9254-3f334367cbc6] Pending
helpers_test.go:344: "busybox-mount" [b175977f-8cc8-4816-9254-3f334367cbc6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b175977f-8cc8-4816-9254-3f334367cbc6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b175977f-8cc8-4816-9254-3f334367cbc6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004450386s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-741768 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741768 /tmp/TestFunctionalparallelMountCmdany-port2918450785/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.81s)
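
The any-port test follows a start-then-poll pattern: launch `minikube mount` as a background daemon, then retry `findmnt -T /mount-9p` over SSH until the 9p filesystem appears (note the first attempt above fails with exit status 1 before the retry). A sketch of that pattern in Go, with /tmp/mount-src as a made-up host directory and `minikube` on PATH assumed:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-741768" // from this run
	src := "/tmp/mount-src"        // hypothetical host directory to export

	if err := os.MkdirAll(src, 0o755); err != nil {
		log.Fatal(err)
	}

	// Start the 9p mount daemon in the background, like the test's `daemon:` step.
	mount := exec.Command("minikube", "-p", profile, "mount", src+":/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill() // the test stops the daemon more gracefully

	// Poll until the guest reports the mount, mirroring the retry after the
	// first findmnt attempt fails above.
	for i := 0; i < 20; i++ {
		out, err := exec.Command("minikube", "-p", profile,
			"ssh", "--", "findmnt", "-T", "/mount-9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("/mount-9p never appeared in the guest")
}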

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "294.720219ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "47.627265ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "309.265171ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "65.257575ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-741768 /tmp/TestFunctionalparallelMountCmdspecific-port3156538825/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741768 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (234.683568ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:55:51.850269  669447 retry.go:31] will retry after 472.859729ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741768 /tmp/TestFunctionalparallelMountCmdspecific-port3156538825/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741768 ssh "sudo umount -f /mount-9p": exit status 1 (254.732818ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-741768 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741768 /tmp/TestFunctionalparallelMountCmdspecific-port3156538825/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-741768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup910862873/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-741768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup910862873/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-741768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup910862873/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-741768 ssh "findmnt -T" /mount1: exit status 1 (272.27309ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:55:53.740530  669447 retry.go:31] will retry after 585.942977ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-741768 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-741768 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup910862873/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup910862873/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-741768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup910862873/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-741768
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-741768
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-741768
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (201.33s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-097312 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-097312 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m20.646664771s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (201.33s)
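For reference, the flags recorded above are all it takes to bring up a multi-control-plane (HA) cluster on the kvm2/crio stack; the same pair of commands can be replayed against any profile name (ha-097312 is just this run's profile):
	minikube start -p ha-097312 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	minikube -p ha-097312 status -v=7 --alsologtostderr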

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.88s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-097312 -- rollout status deployment/busybox: (4.648180646s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-4rksx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-tx8b9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-wz97n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-4rksx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-tx8b9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-wz97n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-4rksx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-tx8b9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-wz97n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.88s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.3s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-4rksx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-4rksx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-tx8b9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-tx8b9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-wz97n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-097312 -- exec busybox-7dff88458-wz97n -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.30s)
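The pipeline in the exec lines above takes the fifth line of busybox's nslookup output and its third space-separated field, i.e. the address host.minikube.internal resolved to; the follow-up ping then confirms the pod can actually reach the host side of this KVM network (192.168.39.1 on this job). Reproduced against one pod from this run:
	minikube kubectl -p ha-097312 -- exec busybox-7dff88458-4rksx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	minikube kubectl -p ha-097312 -- exec busybox-7dff88458-4rksx -- sh -c "ping -c 1 192.168.39.1"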

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (61.41s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-097312 -v=7 --alsologtostderr
E0923 13:00:29.178409  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:29.184907  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:29.196379  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:29.217948  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:29.259491  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:29.341050  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:29.502655  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:29.825373  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:30.467121  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:31.749091  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:34.311124  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:36.850374  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:39.433207  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:49.675003  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-097312 -v=7 --alsologtostderr: (1m0.534111033s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.41s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-097312 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.36s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp testdata/cp-test.txt ha-097312:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3809348295/001/cp-test_ha-097312.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312:/home/docker/cp-test.txt ha-097312-m02:/home/docker/cp-test_ha-097312_ha-097312-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m02 "sudo cat /home/docker/cp-test_ha-097312_ha-097312-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312:/home/docker/cp-test.txt ha-097312-m03:/home/docker/cp-test_ha-097312_ha-097312-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m03 "sudo cat /home/docker/cp-test_ha-097312_ha-097312-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312:/home/docker/cp-test.txt ha-097312-m04:/home/docker/cp-test_ha-097312_ha-097312-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m04 "sudo cat /home/docker/cp-test_ha-097312_ha-097312-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp testdata/cp-test.txt ha-097312-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3809348295/001/cp-test_ha-097312-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m02:/home/docker/cp-test.txt ha-097312:/home/docker/cp-test_ha-097312-m02_ha-097312.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312 "sudo cat /home/docker/cp-test_ha-097312-m02_ha-097312.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m02:/home/docker/cp-test.txt ha-097312-m03:/home/docker/cp-test_ha-097312-m02_ha-097312-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m03 "sudo cat /home/docker/cp-test_ha-097312-m02_ha-097312-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m02:/home/docker/cp-test.txt ha-097312-m04:/home/docker/cp-test_ha-097312-m02_ha-097312-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m04 "sudo cat /home/docker/cp-test_ha-097312-m02_ha-097312-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp testdata/cp-test.txt ha-097312-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3809348295/001/cp-test_ha-097312-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt ha-097312:/home/docker/cp-test_ha-097312-m03_ha-097312.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312 "sudo cat /home/docker/cp-test_ha-097312-m03_ha-097312.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt ha-097312-m02:/home/docker/cp-test_ha-097312-m03_ha-097312-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m02 "sudo cat /home/docker/cp-test_ha-097312-m03_ha-097312-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m03:/home/docker/cp-test.txt ha-097312-m04:/home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m04 "sudo cat /home/docker/cp-test_ha-097312-m03_ha-097312-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp testdata/cp-test.txt ha-097312-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3809348295/001/cp-test_ha-097312-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt ha-097312:/home/docker/cp-test_ha-097312-m04_ha-097312.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312 "sudo cat /home/docker/cp-test_ha-097312-m04_ha-097312.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt ha-097312-m02:/home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m02 "sudo cat /home/docker/cp-test_ha-097312-m04_ha-097312-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 cp ha-097312-m04:/home/docker/cp-test.txt ha-097312-m03:/home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 ssh -n ha-097312-m03 "sudo cat /home/docker/cp-test_ha-097312-m04_ha-097312-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.36s)
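Every hop above is the same two-step pattern: minikube cp to push or pull the file, then minikube ssh -n <node> to cat it back and confirm the contents arrived. One round trip, using this run's profile and node names as placeholders (the local output filename is arbitrary):
	minikube -p ha-097312 cp testdata/cp-test.txt ha-097312-m02:/home/docker/cp-test.txt
	minikube -p ha-097312 ssh -n ha-097312-m02 "sudo cat /home/docker/cp-test.txt"
	minikube -p ha-097312 cp ha-097312-m02:/home/docker/cp-test.txt ./cp-test_ha-097312-m02.txt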

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.030016229s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.03s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.73s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 node delete m03 -v=7 --alsologtostderr
E0923 13:10:29.178244  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:36.850251  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-097312 node delete m03 -v=7 --alsologtostderr: (15.912428108s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.73s)
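The go-template query in the last step is how the suite checks that every node remaining after the deletion still reports Ready; the same pair of commands works for a manual spot check (m03 is this run's secondary control-plane node):
	minikube -p ha-097312 node delete m03 -v=7 --alsologtostderr
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'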

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (324.77s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-097312 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0923 13:15:29.179209  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:15:36.850608  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:16:52.245730  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-097312 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m23.897286035s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (324.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-097312 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-097312 --control-plane -v=7 --alsologtostderr: (1m19.182011198s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-097312 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.07s)
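The only difference from the AddWorkerNode step earlier in this suite is the --control-plane flag, which joins the new machine as another control-plane member instead of a worker; both forms, against this run's profile:
	minikube node add -p ha-097312 -v=7 --alsologtostderr                   # joins a worker
	minikube node add -p ha-097312 --control-plane -v=7 --alsologtostderr   # joins an additional control plane
	minikube -p ha-097312 status -v=7 --alsologtostderr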

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                    
TestJSONOutput/start/Command (53.8s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-877454 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0923 13:20:29.178099  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:20:36.850252  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-877454 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (53.797714472s)
--- PASS: TestJSONOutput/start/Command (53.80s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-877454 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-877454 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (23.36s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-877454 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-877454 --output=json --user=testUser: (23.3561813s)
--- PASS: TestJSONOutput/stop/Command (23.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-940694 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-940694 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.149445ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0660e10b-e5ed-4805-85de-9e621fe262e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-940694] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac66e6dc-1411-4a67-a445-28389904d36f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"5fa954e8-1967-468c-b18c-d90acb1ddbb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"04d5e6b1-2ec5-492c-ab87-29e1bf928928","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig"}}
	{"specversion":"1.0","id":"b1706618-fd04-4fd7-8fd5-04bbb0c13de0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube"}}
	{"specversion":"1.0","id":"22b31935-8a34-43fa-926d-6bbf41524386","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8ba5907b-959d-49cb-a778-aa72a3967c3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f73cd87-da54-4d44-a0ec-761eaf7423ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-940694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-940694
--- PASS: TestErrorJSONOutput (0.21s)
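Each line of the --output=json stream above is a CloudEvents-style envelope (specversion, id, source, type, data), so the failure can be extracted mechanically instead of scraping text. A small sketch, assuming jq is available on the host; --driver=fail is the deliberately unsupported value this test feeds minikube to force exit code 56:
	minikube start -p json-output-error-940694 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'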

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (92.16s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-149711 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-149711 --driver=kvm2  --container-runtime=crio: (43.738729457s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-161316 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-161316 --driver=kvm2  --container-runtime=crio: (45.3611672s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-149711
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-161316
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-161316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-161316
helpers_test.go:175: Cleaning up "first-149711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-149711
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-149711: (1.004634194s)
--- PASS: TestMinikubeProfile (92.16s)
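The flow above is simply two independent KVM clusters plus profile switching; profile list -ojson is the machine-readable view the test parses after each switch. The equivalent manual sequence (the profile names are arbitrary):
	minikube start -p first-149711 --driver=kvm2 --container-runtime=crio
	minikube start -p second-161316 --driver=kvm2 --container-runtime=crio
	minikube profile first-149711        # make first-149711 the active profile
	minikube profile list -ojson
	minikube delete -p second-161316
	minikube delete -p first-149711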

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.77s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-353784 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-353784 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.768378652s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.77s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-353784 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-353784 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
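Both checks work because the --mount flag used in StartWithMountFirst exposes the host directory inside the VM over a 9p filesystem at /minikube-host, so an ls of that path and a grep of the mount table are enough to verify it:
	minikube -p mount-start-1-353784 ssh -- ls /minikube-host
	minikube -p mount-start-1-353784 ssh -- "mount | grep 9p"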

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.24s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-367064 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-367064 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.238119056s)
E0923 13:23:39.920413  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/StartWithMountSecond (24.24s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-367064 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-367064 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-353784 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-367064 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-367064 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-367064
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-367064: (1.276762973s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.58s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-367064
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-367064: (21.5753143s)
--- PASS: TestMountStart/serial/RestartStopped (22.58s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-367064 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-367064 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (110.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851928 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0923 13:25:29.178582  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:25:36.850185  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851928 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.533672681s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.94s)
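Unlike the HA runs earlier in this report, the multinode suite uses --nodes=2, which brings up one control plane plus one worker (m02, as the StopNode status output below confirms) in a single start; status then has to list both machines:
	minikube start -p multinode-851928 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
	minikube -p multinode-851928 status --alsologtostderr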

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.18s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-851928 -- rollout status deployment/busybox: (4.700441639s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- exec busybox-7dff88458-7xvmf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- exec busybox-7dff88458-gl4bk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- exec busybox-7dff88458-7xvmf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- exec busybox-7dff88458-gl4bk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- exec busybox-7dff88458-7xvmf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- exec busybox-7dff88458-gl4bk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.18s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- exec busybox-7dff88458-7xvmf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- exec busybox-7dff88458-7xvmf -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- exec busybox-7dff88458-gl4bk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851928 -- exec busybox-7dff88458-gl4bk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                    
TestMultiNode/serial/AddNode (51.9s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-851928 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-851928 -v 3 --alsologtostderr: (51.309164435s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.90s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-851928 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.59s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp testdata/cp-test.txt multinode-851928:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp multinode-851928:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1094698981/001/cp-test_multinode-851928.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp multinode-851928:/home/docker/cp-test.txt multinode-851928-m02:/home/docker/cp-test_multinode-851928_multinode-851928-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m02 "sudo cat /home/docker/cp-test_multinode-851928_multinode-851928-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp multinode-851928:/home/docker/cp-test.txt multinode-851928-m03:/home/docker/cp-test_multinode-851928_multinode-851928-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m03 "sudo cat /home/docker/cp-test_multinode-851928_multinode-851928-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp testdata/cp-test.txt multinode-851928-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp multinode-851928-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1094698981/001/cp-test_multinode-851928-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp multinode-851928-m02:/home/docker/cp-test.txt multinode-851928:/home/docker/cp-test_multinode-851928-m02_multinode-851928.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928 "sudo cat /home/docker/cp-test_multinode-851928-m02_multinode-851928.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp multinode-851928-m02:/home/docker/cp-test.txt multinode-851928-m03:/home/docker/cp-test_multinode-851928-m02_multinode-851928-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m03 "sudo cat /home/docker/cp-test_multinode-851928-m02_multinode-851928-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp testdata/cp-test.txt multinode-851928-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp multinode-851928-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1094698981/001/cp-test_multinode-851928-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp multinode-851928-m03:/home/docker/cp-test.txt multinode-851928:/home/docker/cp-test_multinode-851928-m03_multinode-851928.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928 "sudo cat /home/docker/cp-test_multinode-851928-m03_multinode-851928.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 cp multinode-851928-m03:/home/docker/cp-test.txt multinode-851928-m02:/home/docker/cp-test_multinode-851928-m03_multinode-851928-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 ssh -n multinode-851928-m02 "sudo cat /home/docker/cp-test_multinode-851928-m03_multinode-851928-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.59s)

                                                
                                    
TestMultiNode/serial/StopNode (2.44s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-851928 node stop m03: (1.55106463s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-851928 status: exit status 7 (441.696372ms)

                                                
                                                
-- stdout --
	multinode-851928
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-851928-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-851928-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-851928 status --alsologtostderr: exit status 7 (450.930612ms)

                                                
                                                
-- stdout --
	multinode-851928
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-851928-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-851928-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:27:07.766100  699452 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:27:07.766233  699452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:27:07.766239  699452 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:07.766245  699452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:27:07.766557  699452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:27:07.766840  699452 out.go:352] Setting JSON to false
	I0923 13:27:07.766893  699452 mustload.go:65] Loading cluster: multinode-851928
	I0923 13:27:07.766967  699452 notify.go:220] Checking for updates...
	I0923 13:27:07.767363  699452 config.go:182] Loaded profile config "multinode-851928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:27:07.767388  699452 status.go:174] checking status of multinode-851928 ...
	I0923 13:27:07.767874  699452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:27:07.767938  699452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:27:07.787981  699452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44541
	I0923 13:27:07.788552  699452 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:27:07.789184  699452 main.go:141] libmachine: Using API Version  1
	I0923 13:27:07.789220  699452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:27:07.789708  699452 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:27:07.789983  699452 main.go:141] libmachine: (multinode-851928) Calling .GetState
	I0923 13:27:07.791823  699452 status.go:364] multinode-851928 host status = "Running" (err=<nil>)
	I0923 13:27:07.791846  699452 host.go:66] Checking if "multinode-851928" exists ...
	I0923 13:27:07.792172  699452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:27:07.792226  699452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:27:07.808490  699452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I0923 13:27:07.808952  699452 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:27:07.809552  699452 main.go:141] libmachine: Using API Version  1
	I0923 13:27:07.809570  699452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:27:07.809895  699452 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:27:07.810151  699452 main.go:141] libmachine: (multinode-851928) Calling .GetIP
	I0923 13:27:07.813303  699452 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:27:07.813754  699452 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:27:07.813781  699452 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:27:07.813957  699452 host.go:66] Checking if "multinode-851928" exists ...
	I0923 13:27:07.814253  699452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:27:07.814297  699452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:27:07.831124  699452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42275
	I0923 13:27:07.831738  699452 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:27:07.832412  699452 main.go:141] libmachine: Using API Version  1
	I0923 13:27:07.832437  699452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:27:07.832817  699452 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:27:07.833049  699452 main.go:141] libmachine: (multinode-851928) Calling .DriverName
	I0923 13:27:07.833275  699452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:27:07.833319  699452 main.go:141] libmachine: (multinode-851928) Calling .GetSSHHostname
	I0923 13:27:07.836674  699452 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:27:07.837200  699452 main.go:141] libmachine: (multinode-851928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:14:99", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:24:21 +0000 UTC Type:0 Mac:52:54:00:93:14:99 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-851928 Clientid:01:52:54:00:93:14:99}
	I0923 13:27:07.837232  699452 main.go:141] libmachine: (multinode-851928) DBG | domain multinode-851928 has defined IP address 192.168.39.168 and MAC address 52:54:00:93:14:99 in network mk-multinode-851928
	I0923 13:27:07.837425  699452 main.go:141] libmachine: (multinode-851928) Calling .GetSSHPort
	I0923 13:27:07.837669  699452 main.go:141] libmachine: (multinode-851928) Calling .GetSSHKeyPath
	I0923 13:27:07.837856  699452 main.go:141] libmachine: (multinode-851928) Calling .GetSSHUsername
	I0923 13:27:07.838012  699452 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/multinode-851928/id_rsa Username:docker}
	I0923 13:27:07.917941  699452 ssh_runner.go:195] Run: systemctl --version
	I0923 13:27:07.924744  699452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:27:07.943297  699452 kubeconfig.go:125] found "multinode-851928" server: "https://192.168.39.168:8443"
	I0923 13:27:07.943344  699452 api_server.go:166] Checking apiserver status ...
	I0923 13:27:07.943383  699452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:27:07.960723  699452 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1052/cgroup
	W0923 13:27:07.972601  699452 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1052/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0923 13:27:07.972678  699452 ssh_runner.go:195] Run: ls
	I0923 13:27:07.977754  699452 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I0923 13:27:07.982382  699452 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
	I0923 13:27:07.982418  699452 status.go:456] multinode-851928 apiserver status = Running (err=<nil>)
	I0923 13:27:07.982430  699452 status.go:176] multinode-851928 status: &{Name:multinode-851928 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:27:07.982452  699452 status.go:174] checking status of multinode-851928-m02 ...
	I0923 13:27:07.982803  699452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:27:07.982842  699452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:27:07.999504  699452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I0923 13:27:08.000102  699452 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:27:08.000706  699452 main.go:141] libmachine: Using API Version  1
	I0923 13:27:08.000737  699452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:27:08.001105  699452 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:27:08.001307  699452 main.go:141] libmachine: (multinode-851928-m02) Calling .GetState
	I0923 13:27:08.003297  699452 status.go:364] multinode-851928-m02 host status = "Running" (err=<nil>)
	I0923 13:27:08.003319  699452 host.go:66] Checking if "multinode-851928-m02" exists ...
	I0923 13:27:08.003628  699452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:27:08.003671  699452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:27:08.021303  699452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44645
	I0923 13:27:08.021911  699452 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:27:08.022529  699452 main.go:141] libmachine: Using API Version  1
	I0923 13:27:08.022562  699452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:27:08.022939  699452 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:27:08.023132  699452 main.go:141] libmachine: (multinode-851928-m02) Calling .GetIP
	I0923 13:27:08.026078  699452 main.go:141] libmachine: (multinode-851928-m02) DBG | domain multinode-851928-m02 has defined MAC address 52:54:00:a2:dd:c4 in network mk-multinode-851928
	I0923 13:27:08.026478  699452 main.go:141] libmachine: (multinode-851928-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:dd:c4", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:25:23 +0000 UTC Type:0 Mac:52:54:00:a2:dd:c4 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:multinode-851928-m02 Clientid:01:52:54:00:a2:dd:c4}
	I0923 13:27:08.026512  699452 main.go:141] libmachine: (multinode-851928-m02) DBG | domain multinode-851928-m02 has defined IP address 192.168.39.25 and MAC address 52:54:00:a2:dd:c4 in network mk-multinode-851928
	I0923 13:27:08.026634  699452 host.go:66] Checking if "multinode-851928-m02" exists ...
	I0923 13:27:08.027006  699452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:27:08.027062  699452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:27:08.043956  699452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0923 13:27:08.044551  699452 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:27:08.045113  699452 main.go:141] libmachine: Using API Version  1
	I0923 13:27:08.045141  699452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:27:08.045501  699452 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:27:08.045749  699452 main.go:141] libmachine: (multinode-851928-m02) Calling .DriverName
	I0923 13:27:08.045998  699452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:27:08.046023  699452 main.go:141] libmachine: (multinode-851928-m02) Calling .GetSSHHostname
	I0923 13:27:08.049144  699452 main.go:141] libmachine: (multinode-851928-m02) DBG | domain multinode-851928-m02 has defined MAC address 52:54:00:a2:dd:c4 in network mk-multinode-851928
	I0923 13:27:08.049572  699452 main.go:141] libmachine: (multinode-851928-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:dd:c4", ip: ""} in network mk-multinode-851928: {Iface:virbr1 ExpiryTime:2024-09-23 14:25:23 +0000 UTC Type:0 Mac:52:54:00:a2:dd:c4 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:multinode-851928-m02 Clientid:01:52:54:00:a2:dd:c4}
	I0923 13:27:08.049606  699452 main.go:141] libmachine: (multinode-851928-m02) DBG | domain multinode-851928-m02 has defined IP address 192.168.39.25 and MAC address 52:54:00:a2:dd:c4 in network mk-multinode-851928
	I0923 13:27:08.049784  699452 main.go:141] libmachine: (multinode-851928-m02) Calling .GetSSHPort
	I0923 13:27:08.050103  699452 main.go:141] libmachine: (multinode-851928-m02) Calling .GetSSHKeyPath
	I0923 13:27:08.050312  699452 main.go:141] libmachine: (multinode-851928-m02) Calling .GetSSHUsername
	I0923 13:27:08.050464  699452 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-662205/.minikube/machines/multinode-851928-m02/id_rsa Username:docker}
	I0923 13:27:08.133391  699452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:27:08.148558  699452 status.go:176] multinode-851928-m02 status: &{Name:multinode-851928-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:27:08.148600  699452 status.go:174] checking status of multinode-851928-m03 ...
	I0923 13:27:08.148937  699452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 13:27:08.148987  699452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 13:27:08.165420  699452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40563
	I0923 13:27:08.165974  699452 main.go:141] libmachine: () Calling .GetVersion
	I0923 13:27:08.166581  699452 main.go:141] libmachine: Using API Version  1
	I0923 13:27:08.166607  699452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 13:27:08.166984  699452 main.go:141] libmachine: () Calling .GetMachineName
	I0923 13:27:08.167185  699452 main.go:141] libmachine: (multinode-851928-m03) Calling .GetState
	I0923 13:27:08.168925  699452 status.go:364] multinode-851928-m03 host status = "Stopped" (err=<nil>)
	I0923 13:27:08.168946  699452 status.go:377] host is not running, skipping remaining checks
	I0923 13:27:08.168952  699452 status.go:176] multinode-851928-m03 status: &{Name:multinode-851928-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
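
A minimal sketch of the stop-node flow above, assuming the multi-node profile multinode-851928 from this run is up; in this run minikube status exits with code 7 while the m03 host is stopped:

    $ out/minikube-linux-amd64 -p multinode-851928 node stop m03
    $ out/minikube-linux-amd64 -p multinode-851928 status; echo "status exit: $?"   # exit 7 while any node host is Stopped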

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-851928 node start m03 -v=7 --alsologtostderr: (39.859652225s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.52s)
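
The companion sketch for bringing the stopped node back (same profile assumption as above); -v=7 and --alsologtostderr only raise log verbosity on stderr:

    $ out/minikube-linux-amd64 -p multinode-851928 node start m03 -v=7 --alsologtostderr
    $ out/minikube-linux-amd64 -p multinode-851928 status -v=7 --alsologtostderr
    $ kubectl get nodes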

                                                
                                    
TestMultiNode/serial/DeleteNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-851928 node delete m03: (1.794634646s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.34s)
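
A sketch of the delete-node check, including a shell-quoted form of the readiness template the test prints above (assumes the kubectl context for multinode-851928 is current):

    $ out/minikube-linux-amd64 -p multinode-851928 node delete m03
    $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'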

                                                
                                    
TestMultiNode/serial/RestartMultiNode (189.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851928 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851928 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m8.515066853s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851928 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (189.05s)
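
The restart above is an ordinary start re-run against the existing profile; --wait=true asks start to wait for cluster components before returning. A sketch:

    $ out/minikube-linux-amd64 start -p multinode-851928 --wait=true -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    $ out/minikube-linux-amd64 -p multinode-851928 status --alsologtostderr
    $ kubectl get nodes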

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-851928
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851928-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-851928-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.093771ms)

                                                
                                                
-- stdout --
	* [multinode-851928-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-851928-m02' is duplicated with machine name 'multinode-851928-m02' in profile 'multinode-851928'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851928-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851928-m03 --driver=kvm2  --container-runtime=crio: (43.617828271s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-851928
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-851928: exit status 80 (224.257303ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-851928 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-851928-m03 already exists in multinode-851928-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-851928-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.94s)
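
Both failures above are name collisions; a sketch of the sequence, assuming the multinode-851928 profile created earlier in this run still exists:

    # exits 14 (MK_USAGE): the profile name collides with machine multinode-851928-m02 inside profile multinode-851928
    $ out/minikube-linux-amd64 start -p multinode-851928-m02 --driver=kvm2 --container-runtime=crio
    # a non-colliding profile name is accepted
    $ out/minikube-linux-amd64 start -p multinode-851928-m03 --driver=kvm2 --container-runtime=crio
    # exits 80 (GUEST_NODE_ADD): the next node name is already taken by the standalone multinode-851928-m03 profile
    $ out/minikube-linux-amd64 node add -p multinode-851928
    $ out/minikube-linux-amd64 delete -p multinode-851928-m03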

                                                
                                    
TestScheduledStopUnix (111.7s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-604658 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-604658 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.056615764s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-604658 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-604658 -n scheduled-stop-604658
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-604658 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0923 13:43:12.886481  669447 retry.go:31] will retry after 135.874µs: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.887645  669447 retry.go:31] will retry after 189.908µs: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.888776  669447 retry.go:31] will retry after 118.329µs: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.889900  669447 retry.go:31] will retry after 484.515µs: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.891051  669447 retry.go:31] will retry after 691.483µs: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.892182  669447 retry.go:31] will retry after 394.155µs: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.893319  669447 retry.go:31] will retry after 1.209763ms: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.895508  669447 retry.go:31] will retry after 1.351301ms: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.897729  669447 retry.go:31] will retry after 2.522041ms: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.900943  669447 retry.go:31] will retry after 5.749323ms: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.907171  669447 retry.go:31] will retry after 6.027196ms: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.913429  669447 retry.go:31] will retry after 5.575226ms: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.919669  669447 retry.go:31] will retry after 17.580175ms: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.937917  669447 retry.go:31] will retry after 21.730723ms: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
I0923 13:43:12.960197  669447 retry.go:31] will retry after 31.531875ms: open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/scheduled-stop-604658/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-604658 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-604658 -n scheduled-stop-604658
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-604658
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-604658 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-604658
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-604658: exit status 7 (68.525235ms)

                                                
                                                
-- stdout --
	scheduled-stop-604658
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-604658 -n scheduled-stop-604658
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-604658 -n scheduled-stop-604658: exit status 7 (66.697979ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-604658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-604658
--- PASS: TestScheduledStopUnix (111.70s)
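
A sketch of the scheduled-stop workflow above (profile name from this run): --schedule arms a delayed stop, --cancel-scheduled aborts a pending one, and once a timer fires status reports Stopped and exits 7:

    $ out/minikube-linux-amd64 stop -p scheduled-stop-604658 --schedule 5m
    $ out/minikube-linux-amd64 stop -p scheduled-stop-604658 --cancel-scheduled
    $ out/minikube-linux-amd64 stop -p scheduled-stop-604658 --schedule 15s
    $ out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-604658 -n scheduled-stop-604658   # "Stopped", exit 7, once the 15s timer has fired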

                                                
                                    
TestRunningBinaryUpgrade (199.49s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2085005415 start -p running-upgrade-575751 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0923 13:45:29.178290  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/functional-741768/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:45:36.849731  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/addons-052630/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2085005415 start -p running-upgrade-575751 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.133614702s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-575751 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-575751 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.825224878s)
helpers_test.go:175: Cleaning up "running-upgrade-575751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-575751
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-575751: (1.242444832s)
--- PASS: TestRunningBinaryUpgrade (199.49s)
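
The upgrade path here is: create a cluster with an old minikube binary, then run start on the same profile with the new binary while the cluster is still running. A sketch, with the temp path below standing in for whichever old release is under test:

    $ /tmp/minikube-v1.26.0.2085005415 start -p running-upgrade-575751 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    $ out/minikube-linux-amd64 start -p running-upgrade-575751 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    $ out/minikube-linux-amd64 delete -p running-upgrade-575751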

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-509500 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-509500 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (88.332582ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-509500] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
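
As the MK_USAGE message says, --no-kubernetes cannot be combined with --kubernetes-version; a sketch of the rejected and accepted forms (profile name from this run):

    # rejected, exits 14
    $ out/minikube-linux-amd64 start -p NoKubernetes-509500 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # if a version is pinned in the global config, clear it first as the error suggests
    $ minikube config unset kubernetes-version
    # accepted
    $ out/minikube-linux-amd64 start -p NoKubernetes-509500 --no-kubernetes --driver=kvm2 --container-runtime=crio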

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (99.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-509500 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-509500 --driver=kvm2  --container-runtime=crio: (1m38.841465231s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-509500 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (99.10s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.23s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (130.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.731144560 start -p stopped-upgrade-780772 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.731144560 start -p stopped-upgrade-780772 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m17.377100597s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.731144560 -p stopped-upgrade-780772 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.731144560 -p stopped-upgrade-780772 stop: (1.393910817s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-780772 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-780772 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.462188899s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (130.23s)
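
Same idea as the running-binary upgrade, except the old cluster is stopped before the new binary takes over; a sketch using the temp binary path from this run:

    $ /tmp/minikube-v1.26.0.731144560 start -p stopped-upgrade-780772 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    $ /tmp/minikube-v1.26.0.731144560 -p stopped-upgrade-780772 stop
    $ out/minikube-linux-amd64 start -p stopped-upgrade-780772 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio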

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (40.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-509500 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-509500 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.576102761s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-509500 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-509500 status -o json: exit status 2 (255.893148ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-509500","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-509500
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-509500: (1.141412168s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (40.97s)
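
The JSON status above is convenient for scripting; a sketch that pulls out the Kubelet field (jq is an assumed helper, not something the test uses; the status command itself exits 2 here because kubelet and the apiserver are stopped):

    $ out/minikube-linux-amd64 -p NoKubernetes-509500 status -o json | jq -r .Kubelet
    # prints "Stopped" for a --no-kubernetes profile whose host is running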

                                                
                                    
TestNoKubernetes/serial/Start (28.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-509500 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-509500 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.296699047s)
--- PASS: TestNoKubernetes/serial/Start (28.30s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-509500 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-509500 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.586066ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
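
The verification above simply asks systemd inside the guest whether kubelet is active; a sketch (same profile assumption, command string copied from the test):

    $ out/minikube-linux-amd64 ssh -p NoKubernetes-509500 "sudo systemctl is-active --quiet service kubelet"
    $ echo $?   # non-zero while kubelet is not running (exit 1 in the run above)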

                                                
                                    
TestNoKubernetes/serial/ProfileList (24.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.297298804s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (9.642079165s)
--- PASS: TestNoKubernetes/serial/ProfileList (24.94s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-509500
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-509500: (3.165104454s)
--- PASS: TestNoKubernetes/serial/Stop (3.17s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-509500 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-509500 --driver=kvm2  --container-runtime=crio: (23.479043472s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.48s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-780772
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                    
TestNetworkPlugins/group/false (3.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-488767 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-488767 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.497519ms)

                                                
                                                
-- stdout --
	* [false-488767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:48:00.184183  710045 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:48:00.184458  710045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:48:00.184468  710045 out.go:358] Setting ErrFile to fd 2...
	I0923 13:48:00.184472  710045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:48:00.184649  710045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-662205/.minikube/bin
	I0923 13:48:00.185231  710045 out.go:352] Setting JSON to false
	I0923 13:48:00.186253  710045 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12623,"bootTime":1727086657,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 13:48:00.186373  710045 start.go:139] virtualization: kvm guest
	I0923 13:48:00.188628  710045 out.go:177] * [false-488767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 13:48:00.189977  710045 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:48:00.189976  710045 notify.go:220] Checking for updates...
	I0923 13:48:00.191324  710045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:48:00.192626  710045 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-662205/kubeconfig
	I0923 13:48:00.193996  710045 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-662205/.minikube
	I0923 13:48:00.195275  710045 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 13:48:00.196484  710045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:48:00.198215  710045 config.go:182] Loaded profile config "NoKubernetes-509500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0923 13:48:00.198355  710045 config.go:182] Loaded profile config "force-systemd-env-640763": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:48:00.198440  710045 config.go:182] Loaded profile config "kubernetes-upgrade-678282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0923 13:48:00.198538  710045 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:48:00.236398  710045 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 13:48:00.237776  710045 start.go:297] selected driver: kvm2
	I0923 13:48:00.237800  710045 start.go:901] validating driver "kvm2" against <nil>
	I0923 13:48:00.237817  710045 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:48:00.240253  710045 out.go:201] 
	W0923 13:48:00.241599  710045 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0923 13:48:00.243103  710045 out.go:201] 

                                                
                                                
** /stderr **
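
The MK_USAGE failure is the expected outcome here: the crio runtime requires a CNI, so --cni=false is rejected before anything is created. A sketch of the rejected form and one accepted alternative (the bridge value is illustrative; omitting --cni also works):

    # rejected, exits 14: "crio" requires CNI
    $ out/minikube-linux-amd64 start -p false-488767 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
    # selecting a real CNI (or leaving --cni at its default) avoids the error
    $ out/minikube-linux-amd64 start -p false-488767 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio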
net_test.go:88: 
----------------------- debugLogs start: false-488767 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-488767" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-488767" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-488767

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488767"

                                                
                                                
----------------------- debugLogs end: false-488767 [took: 2.937239153s] --------------------------------
helpers_test.go:175: Cleaning up "false-488767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-488767
--- PASS: TestNetworkPlugins/group/false (3.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-509500 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-509500 "sudo systemctl is-active --quiet service kubelet": exit status 1 (223.661102ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
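
The check above passes precisely because `systemctl is-active --quiet service kubelet` exits non-zero over ssh (status 3 is systemd's code for an inactive unit), which is the expected state when the profile runs without Kubernetes. A minimal sketch of interpreting that exit code is below; the plain `minikube` invocation is a hypothetical stand-in for the test's ssh helper, not the harness code itself.

package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning shells out (as a stand-in for the test's `minikube ssh` helper)
// and asks systemd whether the kubelet unit is active. A zero exit means active;
// a non-zero exit (status 3 above is systemd's "inactive" code) means stopped.
func kubeletRunning(profile string) (bool, error) {
	cmd := exec.Command("minikube", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		return false, nil // command ran, but the unit is not active
	}
	return false, fmt.Errorf("could not reach the node: %w", err)
}

func main() {
	running, err := kubeletRunning("NoKubernetes-509500")
	fmt.Println("kubelet running:", running, "err:", err)
}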

                                                
                                    
x
+
TestPause/serial/Start (109.83s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-429220 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
I0923 13:48:07.792350  669447 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 13:48:09.970711  669447 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0923 13:48:10.002633  669447 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0923 13:48:10.002676  669447 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0923 13:48:10.002741  669447 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0923 13:48:10.002776  669447 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1309809378/002/docker-machine-driver-kvm2
I0923 13:48:10.045680  669447 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1309809378/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000711760 gz:0xc000711768 tar:0xc000711710 tar.bz2:0xc000711720 tar.gz:0xc000711730 tar.xz:0xc000711740 tar.zst:0xc000711750 tbz2:0xc000711720 tgz:0xc000711730 txz:0xc000711740 tzst:0xc000711750 xz:0xc000711770 zip:0xc000711780 zst:0xc000711778] Getters:map[file:0xc0009fb230 http:0xc0004cbbd0 https:0xc0004cbc20] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 13:48:10.045732  669447 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1309809378/002/docker-machine-driver-kvm2
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-429220 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m49.832581168s)
--- PASS: TestPause/serial/Start (109.83s)
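
The install log interleaved above shows the driver updater preferring an arch-specific release asset and, after the 404 on its checksum file, falling back to the common asset name. A minimal sketch of that try-arch-then-common pattern follows; the helper names and the HEAD-only probe are illustrative assumptions, not minikube's download.go API.

package main

import (
	"fmt"
	"net/http"
)

// tryDownload reports whether the asset at url is reachable.
// (A real downloader would also verify the .sha256 checksum, as the log shows.)
func tryDownload(url string) error {
	resp, err := http.Head(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	return nil
}

// fetchDriver prefers the arch-specific asset and falls back to the common one,
// mirroring the "failed to download arch specific driver ... trying to get the
// common version" behaviour in the log above. Names here are hypothetical.
func fetchDriver(version, arch string) (string, error) {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version
	archURL := fmt.Sprintf("%s/docker-machine-driver-kvm2-%s", base, arch)
	commonURL := base + "/docker-machine-driver-kvm2"

	if err := tryDownload(archURL); err == nil {
		return archURL, nil
	}
	if err := tryDownload(commonURL); err != nil {
		return "", fmt.Errorf("neither arch-specific nor common asset available: %w", err)
	}
	return commonURL, nil
}

func main() {
	url, err := fetchDriver("v1.3.0", "amd64")
	fmt.Println(url, err)
}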

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (46.74s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-429220 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-429220 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.713757967s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (46.74s)

                                                
                                    
x
+
TestPause/serial/Pause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-429220 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-429220 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-429220 --output=json --layout=cluster: exit status 2 (275.466039ms)

                                                
                                                
-- stdout --
	{"Name":"pause-429220","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-429220","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
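
The --output=json --layout=cluster payload above encodes component state as HTTP-style codes (200 OK, 418 Paused, 405 Stopped), and the paused cluster is reported with exit status 2 rather than 0, which the test accepts. A minimal sketch of decoding that payload follows; the struct is derived only from the fields visible in the stdout above and is an assumption, not minikube's own status type.

package main

import (
	"encoding/json"
	"fmt"
)

// componentStatus and clusterStatus model only the fields visible in the
// VerifyStatus output above.
type componentStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string                     `json:"Name"`
		Components map[string]componentStatus `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-429220","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-429220","Components":{
	    "apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	    "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
	for name, c := range st.Nodes[0].Components {
		fmt.Printf("  %s: %d (%s)\n", name, c.StatusCode, c.StatusName)
	}
}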

                                                
                                    
x
+
TestPause/serial/Unpause (0.89s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-429220 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.89s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.23s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-429220 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-429220 --alsologtostderr -v=5: (1.226339367s)
--- PASS: TestPause/serial/PauseAgain (1.23s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.5s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-429220 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-429220 --alsologtostderr -v=5: (1.501943841s)
--- PASS: TestPause/serial/DeletePaused (1.50s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.69s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (87.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m27.824993092s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (69.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m9.071607896s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (84.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m24.220765602s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-488767 "pgrep -a kubelet"
I0923 13:52:10.909952  669447 config.go:182] Loaded profile config "enable-default-cni-488767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-488767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vwzdq" [edb51d9d-2612-4e8b-a215-662d1ce9fe66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vwzdq" [edb51d9d-2612-4e8b-a215-662d1ce9fe66] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005895833s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-488767 "pgrep -a kubelet"
I0923 13:52:17.149166  669447 config.go:182] Loaded profile config "auto-488767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-488767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nk67h" [6c6f935f-3104-4f97-816b-bfd114c3782e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nk67h" [6c6f935f-3104-4f97-816b-bfd114c3782e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005285778s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (21.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-488767 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-488767 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.213762853s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 13:52:37.505691  669447 retry.go:31] will retry after 752.327295ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-488767 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-488767 exec deployment/netcat -- nslookup kubernetes.default: (5.204103189s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (21.17s)
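
Here the first nslookup times out, the harness backs off for roughly 750ms and retries, and the second attempt resolves, so the subtest still passes. A minimal sketch of that retry-until-deadline pattern follows; the fixed poll interval and the kubectl invocation are illustrative assumptions rather than the exact retry.go logic.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDNS re-runs `kubectl exec deployment/netcat -- nslookup kubernetes.default`
// until it succeeds or the deadline passes, mirroring the retry seen in the log.
func waitForDNS(context string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "--context", context,
			"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
		if err := cmd.Run(); err == nil {
			return nil // DNS resolved
		} else if time.Now().After(deadline) {
			return fmt.Errorf("DNS never resolved within %s: %w", timeout, err)
		}
		time.Sleep(750 * time.Millisecond) // back off before the next attempt
	}
}

func main() {
	fmt.Println(waitForDNS("enable-default-cni-488767", 2*time.Minute))
}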

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-488767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hwwlt" [16247c22-6f14-4f49-8136-93685b6e7947] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005026656s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (77.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m17.994541077s)
--- PASS: TestNetworkPlugins/group/flannel/Start (77.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-488767 "pgrep -a kubelet"
I0923 13:52:47.086530  669447 config.go:182] Loaded profile config "kindnet-488767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-488767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gvq5w" [d0d902a6-a58a-4863-9943-9c96eadec331] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gvq5w" [d0d902a6-a58a-4863-9943-9c96eadec331] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004822854s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-488767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (99.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m39.976610589s)
--- PASS: TestNetworkPlugins/group/calico/Start (99.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (109.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m49.265316105s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (109.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (144.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-488767 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m24.753711785s)
--- PASS: TestNetworkPlugins/group/bridge/Start (144.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-w95tk" [313fcc05-20f6-4e31-ae38-f28f1f6392ab] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004475506s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-488767 "pgrep -a kubelet"
I0923 13:54:09.483456  669447 config.go:182] Loaded profile config "flannel-488767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-488767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5dcvd" [84fdc0ce-39c5-4163-8dc7-1a9b6003ae09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5dcvd" [84fdc0ce-39c5-4163-8dc7-1a9b6003ae09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.006684819s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-488767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nj2hk" [fa281525-5a1b-4b4e-a623-31fb43bc8609] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006290426s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-488767 "pgrep -a kubelet"
I0923 13:54:46.629141  669447 config.go:182] Loaded profile config "calico-488767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-488767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-488767 replace --force -f testdata/netcat-deployment.yaml: (1.488500932s)
I0923 13:54:48.125132  669447 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0923 13:54:48.902551  669447 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8vj4d" [434cbea7-67f4-4df9-80bf-218bd48e7a0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8vj4d" [434cbea7-67f4-4df9-80bf-218bd48e7a0c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004458267s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.57s)
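
In the calico run the harness first waits for the netcat Deployment to "stabilize", i.e. for status.observedGeneration to catch up with metadata.generation and status.replicas to match spec.replicas, before waiting on the app=netcat pods. A minimal client-go sketch of that check follows; it illustrates the idea under those assumptions and is not the kapi.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deploymentStable reports whether the controller has observed the latest spec
// and brought up the requested number of replicas, the same two conditions the
// "Waiting for deployment netcat to stabilize" lines above are tracking.
func deploymentStable(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	want := int32(1)
	if d.Spec.Replicas != nil {
		want = *d.Spec.Replicas
	}
	return d.Status.ObservedGeneration >= d.Generation && d.Status.Replicas == want, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ok, err := deploymentStable(cs, "default", "netcat")
		if err == nil && ok {
			break
		}
		time.Sleep(time.Second) // poll until the deployment has settled
	}
	fmt.Println("netcat deployment stabilized")
}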

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-488767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-488767 "pgrep -a kubelet"
I0923 13:55:03.352707  669447 config.go:182] Loaded profile config "custom-flannel-488767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-488767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mxnb6" [a17d755b-f24c-422e-bba5-542d1f6673b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mxnb6" [a17d755b-f24c-422e-bba5-542d1f6673b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00576153s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-488767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-488767 "pgrep -a kubelet"
I0923 13:55:41.476989  669447 config.go:182] Loaded profile config "bridge-488767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-488767 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tcqnj" [3e59d4c5-f9bf-40f0-98f2-4c1407d9f5ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tcqnj" [3e59d4c5-f9bf-40f0-98f2-4c1407d9f5ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003965453s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-488767 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-488767 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0923 14:25:03.596856  669447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-662205/.minikube/profiles/custom-flannel-488767/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    

Test skip (36/274)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
37 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 2.93
271 TestNetworkPlugins/group/cilium 3.62
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
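Every TunnelCmd skip above traces to the same guard at functional_test_tunnel_test.go:90: the tunnel tests must modify the host routing table, so they bail out when 'route' cannot be run without a password prompt. A minimal sketch of that kind of probe follows; the helper name is hypothetical and this is not minikube's actual implementation (`sudo -n` fails instead of prompting whenever a password would be required):

-- sketch (Go) --
	package tunnel_test

	import (
		"os/exec"
		"testing"
	)

	// requireRouteWithoutPassword skips the calling test when 'route' cannot be
	// invoked through sudo without an interactive password prompt.
	func requireRouteWithoutPassword(t *testing.T) {
		t.Helper()
		// sudo -n never prompts; it exits non-zero if a password would be needed.
		if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
			t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
		}
	}
-- /sketch --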

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
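TestChangeNoneUser and TestScheduledStopWindows gate on the host environment rather than on the driver. A minimal sketch of those two guards, using only the standard library (the real checks live at none_test.go:38 and scheduled_stop_test.go:42; the SUDO_USER check below covers only the environment half of the none-driver requirement):

-- sketch (Go) --
	package environment_test

	import (
		"os"
		"runtime"
		"testing"
	)

	// skipUnlessWindows mirrors the scheduled-stop guard: the test exercises
	// Windows-specific behaviour and is skipped everywhere else.
	func skipUnlessWindows(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "windows" {
			t.Skip("test only runs on windows")
		}
	}

	// skipWithoutSudoUser sketches the SUDO_USER half of the none-driver guard.
	func skipWithoutSudoUser(t *testing.T) {
		t.Helper()
		if os.Getenv("SUDO_USER") == "" {
			t.Skip("Test requires none driver and SUDO_USER env to not be empty")
		}
	}
-- /sketch --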

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
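The KiC, insufficient-storage, Skaffold and container-upgrade skips above all follow one pattern: inspect the driver or container runtime selected for the run and skip when it does not match. A minimal sketch of that gate, with driverName and containerRuntime as hypothetical stand-ins for the suite's --driver and --container-runtime settings (kvm2 and crio for this report):

-- sketch (Go) --
	package gates_test

	import (
		"strings"
		"testing"
	)

	// Hypothetical stand-ins for the flags this run was started with.
	var (
		driverName       = "kvm2"
		containerRuntime = "crio"
	)

	// skipUnlessDockerDriver mirrors the "only runs with docker driver" guards.
	func skipUnlessDockerDriver(t *testing.T) {
		t.Helper()
		if !strings.EqualFold(driverName, "docker") {
			t.Skipf("only runs with docker driver (current driver: %s)", driverName)
		}
	}

	// skipUnlessDockerRuntime mirrors the Skaffold guard, which needs docker-env.
	func skipUnlessDockerRuntime(t *testing.T) {
		t.Helper()
		if containerRuntime != "docker" {
			t.Skipf("skaffold requires docker-env, currently testing %s container runtime", containerRuntime)
		}
	}
-- /sketch --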

                                                
                                    
TestNetworkPlugins/group/kubenet (2.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-488767 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-488767" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-488767" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-488767

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488767"

                                                
                                                
----------------------- debugLogs end: kubenet-488767 [took: 2.785271334s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-488767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-488767
--- SKIP: TestNetworkPlugins/group/kubenet (2.93s)
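The kubenet case is skipped before any cluster is started, which is why every debugLogs probe above reports that the kubenet-488767 profile/context does not exist. A minimal sketch of the guard behind net_test.go:93, with containerRuntime again a hypothetical stand-in (kubenet is a no-CNI network mode, and the CRI-O and containerd runtimes require a CNI):

-- sketch (Go) --
	package netgate_test

	import "testing"

	// containerRuntime stands in for the suite's --container-runtime setting.
	var containerRuntime = "crio"

	// skipKubenetIfCNIRequired: kubenet provides no CNI, so it is only exercised
	// when the docker runtime (which does not need a CNI) is under test.
	func skipKubenetIfCNIRequired(t *testing.T) {
		t.Helper()
		if containerRuntime != "docker" {
			t.Skipf("Skipping the test as %s container runtime requires CNI", containerRuntime)
		}
	}
-- /sketch --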

                                                
                                    
TestNetworkPlugins/group/cilium (3.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-488767 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-488767" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-488767

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-488767" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488767"

                                                
                                                
----------------------- debugLogs end: cilium-488767 [took: 3.457459996s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-488767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-488767
--- SKIP: TestNetworkPlugins/group/cilium (3.62s)
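Even though both network-plugin cases are skipped, the harness still runs the cleanup recorded at helpers_test.go:175-178 and deletes the never-started profile. A minimal sketch of that cleanup pattern; the binary path and the use of a per-test cleanup hook mirror the log above and are assumptions of this sketch, not the harness's actual helper:

-- sketch (Go) --
	package cleanup_test

	import (
		"os/exec"
		"testing"
	)

	// cleanupProfile registers a deferred 'minikube delete -p <profile>' so the
	// profile is removed whether or not the cluster was ever started.
	func cleanupProfile(t *testing.T, profile string) {
		t.Helper()
		t.Cleanup(func() {
			out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
			if err != nil {
				t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
			}
		})
	}
-- /sketch --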

                                                
                                    